
Memory problems: SplashRequest references keep going up #76

Open
nehakansal opened this issue Jun 13, 2018 · 9 comments

@nehakansal

I noticed that the memory usage of the scrapy process keeps going up gradually when I use Undercrawler. I then used the prefs() function (mentioned in the scrapy docs) to monitor the live references, and noticed that the SplashRequest references never go down; the oldest one in there is from the time the crawl started. Depending on the site, the reference count can climb very fast. Here are some numbers:

(1) Crawling cnn, after 5 minutes
Crawled 65 pages
prefs() results -
BaseSpider 1 oldest: 313s ago
SplashRequest 2120 oldest: 300s ago

(2) Crawling reddit, after 5 minutes
Crawled 156 pages
prefs() results -
BaseSpider 1 oldest: 308s ago
SplashRequest 16100 oldest: 302s ago
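
For reference, a minimal sketch of how these live references can be inspected, assuming Scrapy's default telnet console (the port may differ per run):

# From the telnet console (enabled by default):
#   $ telnet localhost 6023
#   >>> prefs()   # shortcut for print_live_refs()
#
# Or programmatically, e.g. from a spider method or an extension:
from scrapy.utils.trackref import print_live_refs, get_oldest

print_live_refs()                     # per-class live counts and age of the oldest instance
oldest = get_oldest('SplashRequest')  # the oldest live SplashRequest, or None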

So, for both crawls, the SplashRequest objects seem to be the problem, but for Reddit there are, for some reason, far more of them after just 5 minutes.

I would expect each SplashRequest to be released after the request has been made, for all crawls. Can someone explain this behavior? And what can be done about it?

Thanks.

@lopuhin
Contributor

lopuhin commented Jun 14, 2018

@nehakansal are these references from requests that have already been made? I'm asking because there are two possible explanations for what you are seeing:

  • this is a bug: a reference to the request is kept when it shouldn't be. In this case, the best way to debug it is IMO to look at what is referencing these objects (I usually use https://pythonhosted.org/Pympler/muppy.html for that)
  • there is no bug: these are requests that were extracted from the first pages crawled, and they are sitting in the queue waiting to be crawled. In this case, one solution is to use a disk-based queue (see the sketch after this list): https://doc.scrapy.org/en/latest/topics/jobs.html?highlight=JOBDIR#how-to-use-it - IIRC it works fine with Undercrawler and we used it. When doing a breadth-first crawl, the number of links in the queue can be really large, so a disk-based queue is a must to keep memory usage reasonable.
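
A minimal sketch of enabling the disk-based queue (the spider name and job directory path are just examples):

# On the command line:
#   scrapy crawl undercrawler -s JOBDIR=crawls/run-1
#
# or equivalently in settings.py:
JOBDIR = 'crawls/run-1'   # pending requests are serialized to disk instead of held in memory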

@nehakansal
Author

nehakansal commented Jun 14, 2018

Thanks. I will try using muppy to get more information. Follow-up questions though -

  1. Given that it's breadth-first, the time on the oldest request shown by prefs() should at some point change to a time more recent than when the crawl started, correct?
  2. I am using Aquarium with Undercrawler; in that case isn't the request queue maintained by HAProxy rather than by Scrapy? Or do both HAProxy and Scrapy maintain a queue?

@lopuhin
Contributor

lopuhin commented Jun 14, 2018

  1. Yes, this is correct. The queue size is much larger for breadth-first crawls than it is for depth-first ones, because the number of links from, say, level 2 to level 3 can be huge (see the settings sketch after this list).
  2. They both maintain a queue: HAProxy maintains just a small queue of recent requests, while the Scrapy spider creates requests for all links as they are extracted and keeps them until their turn to be crawled comes.
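
For context, a breadth-first crawl with disk queues is usually configured along these lines (a sketch based on the Scrapy FAQ; shown only to illustrate why the queue grows):

# settings.py
DEPTH_PRIORITY = 1                                           # prefer shallower requests (breadth-first)
SCHEDULER_DISK_QUEUE = 'scrapy.squeues.PickleFifoDiskQueue'  # FIFO on disk (used when JOBDIR is set)
SCHEDULER_MEMORY_QUEUE = 'scrapy.squeues.FifoMemoryQueue'    # FIFO in memory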

@nehakansal
Author

nehakansal commented Jun 14, 2018

  1. Okay. And if a SplashRequest has completed, the reference to it should ideally go away even if the links on that page are still being processed, is that right? If yes, and if the oldest time stayed the same even after, let's say, an hour into the crawl, would that indicate that it's more likely a bug than just the nature of breadth-first?
  2. Got it, thanks.

@lopuhin
Contributor

lopuhin commented Jun 14, 2018

And if a SplashRequest has completed, the reference to that should go away ideally even if the links on that page are still being processed, is that right?

Yes, that's correct.

If yes and if the oldest time stayed same even after lets say an hour into the crawl, would that indicate that its more likely a bug than just the nature of breadth-first?

I think it's hard to tell whether an hour is enough or not, depending on the site. The only reliable way is to check whether there are any alive requests that have already been crawled.
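
One way to do that check (a sketch; crawled_urls is a hypothetical set that the spider would fill in its parse callback):

from scrapy.utils.trackref import iter_all

# Live SplashRequest objects whose URL we know has already been crawled:
leaked = [r for r in iter_all('SplashRequest') if r.url in crawled_urls]
print(len(leaked), 'live requests for already-crawled URLs')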

@nehakansal
Author

Thanks. In one of my tests I checked that the oldest URL among the SplashRequest references had already been crawled, but I will check for that again in a few more tests. And I will also use muppy soon. I will post my updates here once I do.

@nehakansal
Author

nehakansal commented Jun 25, 2018

Hi, here is some data from one of my runs where it seemed like something was wrong. I didn't see this behavior in a couple of other runs, so it's probably not consistent, but I don't have enough sample runs to be completely sure. I thought I would at least post what I have and see if you have any thoughts on it. I crawled the site 'jair.org', and one of the URLs that was crawled successfully very early on stayed the oldest 'SplashRequest' object until I aborted the crawl, 3 minutes after that URL had been crawled successfully. I couldn't get muppy to run for some reason, so I got the data below using Scrapy's trackref tool and objgraph. Below are the meta and headers dicts for the URL that stayed the oldest, 'https://www.jair.org/index.php/jair/article/view/11210/26421'. Any idea why this SplashRequest object still had references even after being crawled and scraped successfully?

Here's how I got the oldest SplashRequest object:

from scrapy.utils.trackref import get_oldest
# Returns the oldest live (tracked) SplashRequest instance, or None if there are none
get_oldest('SplashRequest')

Meta

extracted_at https://www.jair.org/index.php/jair
download_latency 1.491
download_slot www.jair.org
download_timeout 267.0
avoid_dup_content True
_splash_processed True
depth 1
splash {'http_status_from_error_code': True, 'session_id': 'default', 'cache_args': ['lua_source', 'js_source'], '_local_arg_fingerprints': {'lua_source': 'LOCAL+592eab4546bc5ab9e1c078a4a25648b2312d4211', 'js_source': 'LOCAL+e67f9577762ee32610fa2ad1bb02ec896b8791e7'}, 'endpoint': 'execute', 'args': {'images_enabled': False, 'headers': {'User-Agent': 'Mozilla/5.0 (X11; Linux i686) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/43.0.2357.130 Chrome/43.0.2357.130 Safari/537.36', 'Referer': 'https://www.jair.org/index.php/jair', 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', 'Accept-Language': 'en'}, 'load_args': {'lua_source': 'e6f1a2e0a9bcc4103c6a23139165b6b156d1716e', 'js_source': 'e01e258671da83b6eb7de5306c2b0c96767e36ef'}, 'url': 'https://www.jair.org/index.php/jair/article/view/11210/26421', 'run_hh': True, 'screenshot_width': 400, 'return_png': True, 'cookies': [{'secure': False, 'domain': '.www.jair.org', 'name': 'OJSSID', 'value': 'kgeg04h2mmkia2liinmab12al2', 'httpOnly': True, 'path': '/'}], 'screenshot_height': 400, 'timeout': 262}, 'slot_policy': 'per_domain', 'magic_response': True}
from_search None
ajax_crawlable True

Headers

b'User-Agent' [b'Mozilla/5.0 (X11; Linux i686) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/43.0.2357.130 Chrome/43.0.2357.130 Safari/537.36']
b'Accept-Language' [b'en']
b'Accept-Encoding' [b'gzip,deflate']
b'Content-Type' [b'application/json']
b'Accept' [b'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8']

The meta and the headers were the same every time. I have attached the objgraph of the oldest request: one from the first time it was the oldest URL and one from right before I aborted the crawl. You will notice that the URL in the graph is, for some reason, the referrer URL instead of the actual URL. I don't know why, but I did double-check that the object I used to print the metadata is the same object I passed to objgraph.

First: jair3-1-objgraph-57uwd1pi (attached objgraph image)

Last: jair3-last-objgraph-vxh1cji6 (attached objgraph image)
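
The back-reference graphs above can be generated with objgraph roughly like this (a sketch; the depth and filename are illustrative):

import objgraph
from scrapy.utils.trackref import get_oldest

req = get_oldest('SplashRequest')
# Draw what is keeping the request alive (needs graphviz for image output)
objgraph.show_backrefs([req], max_depth=5, filename='oldest-splashrequest-backrefs.png')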

@lopuhin
Contributor

lopuhin commented Jun 26, 2018

@nehakansal I'm not 100% sure, but I would check how long the parse method for that (or some other) request runs, in wall time. Even after a request has been crawled, we will still be extracting links from its response and scheduling them to be crawled, which can take a while depending on concurrency settings, Splash capacity, etc. - and during that time the request will still be around, attached to the response.
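
One rough way to measure that (a sketch; this decorator is not part of Undercrawler and assumes the callback returns an iterable of items/requests):

import time
from functools import wraps

def timed_callback(callback):
    # Log wall-clock time spent consuming a parse callback (illustrative helper).
    @wraps(callback)
    def wrapper(self, response, **kwargs):
        start = time.monotonic()
        try:
            # Exhausting the output here includes link-extraction and scheduling time.
            yield from callback(self, response, **kwargs)
        finally:
            self.logger.info('%s(%s) took %.1fs wall time', callback.__name__,
                             response.url, time.monotonic() - start)
    return wrapper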

I can also suggest a different debugging strategy: instead of tracking individual requests, I'd enable disk queues, start a longer crawl, and see whether memory keeps leaking or stays bounded. Then, if it's leaking, enter the console after it has leaked enough and see why the objects are kept alive. That way you can be sure it is a real leak.
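
For that longer crawl, Scrapy's built-in memory monitoring can help spot whether usage stays bounded (a sketch of settings; the limit and path are illustrative):

# settings.py
JOBDIR = 'crawls/leak-check-1'   # disk-based request queues
MEMUSAGE_ENABLED = True          # enable the memory usage extension (tracks peak memory in stats)
MEMUSAGE_LIMIT_MB = 2048         # optionally stop the crawl if memory exceeds this
MEMDEBUG_ENABLED = True          # collect memory-debugging stats when the spider closes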

@nehakansal
Author

Thanks, that info helps.

As for the other suggestion - that's actually exactly what I started doing yesterday: I enabled the disk queue. I need to run more tests to make sure that the disk queue is working well for me, or that, even if there are still some memory inconsistencies, the queue helps enough to get me going for now.
