I use scrapy-splash for making requests in my crawling service. After some time, my services' RAM usage increases continuously, and after a while they consume all the RAM of the VM. The weird thing is that the Splash service itself works properly; it is the services that use Splash for requests that have the memory leak. For more detail, here is my code snippet and the Splash config I use:
code:
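(My original snippet was not captured here, so below is a minimal sketch of the kind of spider I run, assuming the standard scrapy-splash `SplashRequest` API; the spider name, URLs, and wait time are placeholders, not my real values.)

```python
import scrapy
from scrapy_splash import SplashRequest


class MySpider(scrapy.Spider):
    # hypothetical spider -- name and start URLs are placeholders
    name = "example"
    start_urls = ["https://example.com"]

    def start_requests(self):
        for url in self.start_urls:
            # render each page through the Splash HTTP API
            yield SplashRequest(
                url,
                callback=self.parse,
                endpoint="render.html",
                args={"wait": 2},
            )

    def parse(self, response):
        # response body is the HTML as rendered by Splash
        yield {"url": response.url, "title": response.css("title::text").get()}
```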
config:
I use scrapinghub/splash:3.1 as the Splash image, and this is my Splash service's docker-compose:
```yaml
services:
  splash:
    image: scrapinghub/splash:3.1
    ports:
      - "port:port"
    networks:
      - net
```
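For reference, the memory-capped variant I mention below (the one that makes me miss sites) would look roughly like this; the 3000 MB cap and 5 slots are placeholder values, not what I actually run:

```yaml
services:
  splash:
    image: scrapinghub/splash:3.1
    # restart Splash automatically after it exits on hitting the RSS cap
    restart: unless-stopped
    # --maxrss makes Splash exit once its RSS exceeds the given MB;
    # --slots limits how many renders run concurrently
    command: --maxrss 3000 --slots 5
    ports:
      - "port:port"
    networks:
      - net
```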
Note that I run my code in a Docker container on a VM.
What do you think I should do about this? I am also aware of the memory limit, --maxrss, and --slots options for preventing Splash from using lots of RAM, but going that route causes my crawling service to miss a bunch of websites. How should I handle it in my code?
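One direction I'm considering, in case it helps frame an answer: keep --maxrss plus a Docker restart policy, and make the crawler retry the requests that fail while Splash is restarting instead of dropping them. A sketch of the Scrapy settings I mean, assuming the stock RetryMiddleware and the standard scrapy-splash middleware wiring from its README (the Splash host/port is a placeholder):

```python
# settings.py -- a sketch, not my exact config
SPLASH_URL = "http://splash:8050"  # placeholder host/port

# Retry pages that fail while Splash is down or restarting,
# so --maxrss restarts don't make the crawl miss sites.
RETRY_ENABLED = True
RETRY_TIMES = 5
RETRY_HTTP_CODES = [500, 502, 503, 504, 522, 524]

# Standard scrapy-splash wiring (from the scrapy-splash README)
DOWNLOADER_MIDDLEWARES = {
    "scrapy_splash.SplashCookiesMiddleware": 723,
    "scrapy_splash.SplashMiddleware": 725,
    "scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware": 810,
}
SPIDER_MIDDLEWARES = {
    "scrapy_splash.SplashDeduplicateArgsMiddleware": 100,
}
DUPEFILTER_CLASS = "scrapy_splash.SplashAwareDupeFilter"
```

Whether this is enough, or whether the leak on the client side needs a different fix, is exactly what I'm asking about.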