Replies: 3 comments
-
OK, it looks like it's gone down now and is using only 724MB... Now can someone explain what is going on?
-
Could it be caused by Tesseract? I mean: is this happening the same way if you disable OCR?
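For reference, OCR can be switched off in the job settings to test that hypothesis. This is only a minimal sketch of the relevant part of the job's _settings.yaml, assuming the default FSCrawler layout and the job name job_name used in the docker command further down; the fs.ocr.enabled switch is what disables Tesseract:
name: "job_name"
fs:
  ocr:
    enabled: false
With OCR disabled, re-running the same crawl should show whether Tesseract is where the memory is going.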
-
Another question: if you are using a recent build, could it be caused by the semantic text feature? I just added it recently.
-
I have indexed about 25,000 PDFs so far and my available RAM continues to decline. When I started fscrawler the process was using only 625MB... and it went up to 3.25GB. Obviously it seems to have something to do with logging: I deleted most of the log files and it is now down to 2.5GB. I am using Docker for fscrawler, and when I ran it I thought I had set it up so that no log files would be saved inside the container:
docker run -d --env FS_JAVA_OPTS=-DDOC_LEVEL=debug --name fscrawler -v /home/serveracct/logs/log1:/usr/share/fscrawler/logs -v /home/serveracct/logs/log2:/tmp -v ~/.fscrawler:/root/.fscrawler -v /mnt/cloud/cases:/tmp/es:ro dadoonet/fscrawler fscrawler job_name
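If the growth turns out to be the JVM heap rather than logging, one thing worth trying (a sketch only, not a confirmed fix) is capping the heap through the same FS_JAVA_OPTS variable and, as a harder backstop, limiting the container itself. -Xmx is the standard JVM maximum-heap flag and --memory is a standard docker run option; the 2g/3g values here are arbitrary examples:
docker run -d --memory=3g \
  --env FS_JAVA_OPTS="-DDOC_LEVEL=debug -Xmx2g" \
  --name fscrawler \
  -v /home/serveracct/logs/log1:/usr/share/fscrawler/logs \
  -v /home/serveracct/logs/log2:/tmp \
  -v ~/.fscrawler:/root/.fscrawler \
  -v /mnt/cloud/cases:/tmp/es:ro \
  dadoonet/fscrawler fscrawler job_name
Note that -Xmx only bounds the Java heap; native memory used by libraries such as Tesseract sits outside it, which is why the container limit is the harder stop.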
Here is a description of the process that is taking up increasing memory: [screenshot showing the process]
Any ideas how to stop the excessive use of RAM?
And incidentally, it may say 794MB of memory above, but it's actually 2.37GB.
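As a side note on which of those two numbers to trust: the resident size of the Java process and the memory accounted to the container can legitimately differ, and the container-level figure is the easiest to check directly. A standard way to read it (shown here only as an illustration) is:
docker stats fscrawler --no-stream
which prints the current memory usage and limit for the running fscrawler container.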
Scott