When I run the collector on a large number of files (e.g. 120k, some with high sampling rates like 2000 Hz for a full day, up to 400-500 MB each), it fills memory and swap, and once both are full the script gets killed (presumably by the OS itself, since the command line only shows "Killed" and no pythonic MemoryError).
The number of files being processed should not matter (well, the file names are kept in memory). However, the collector needs to keep all samples of the processed file and its neighbors in memory. At 2000 Hz it does not surprise me that your machine runs out of memory (depending on how much you have). I can look into making some optimizations, but the bulk of the memory consumption will be necessary, since each sample is 32-bit in the end (excluding whatever overhead is required for the actual computation of all metrics). For now I'd say skip the 2000 Hz data.
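For a rough sense of scale, here is a back-of-envelope sketch of the raw sample memory for one full day at 2000 Hz. It assumes a single channel, 32-bit (4-byte) samples, and that the current file plus its two neighbors are held in memory at once; the three-file figure and single channel are assumptions for illustration, not a statement about the collector's internals.

```python
# Rough estimate of raw sample memory for one 2000 Hz day-long file
# plus its two neighbors, assuming one channel and 4 bytes per sample.
SAMPLING_RATE_HZ = 2000
SECONDS_PER_DAY = 24 * 60 * 60
BYTES_PER_SAMPLE = 4          # each sample is 32-bit in the end
FILES_IN_MEMORY = 3           # current file plus neighbors (assumed)

samples_per_file = SAMPLING_RATE_HZ * SECONDS_PER_DAY
bytes_total = samples_per_file * BYTES_PER_SAMPLE * FILES_IN_MEMORY

print(f"{samples_per_file:,} samples per file")
print(f"~{bytes_total / 1024**3:.1f} GiB of raw samples held at once")
```

Under those assumptions this already comes to roughly 2 GiB of raw samples before any overhead from the metric computations, which is consistent with memory and swap filling up on a modest machine.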