Random User-Agent middleware for the Scrapy scraping framework, based on fake-useragent, which picks User-Agent strings according to usage statistics from a real-world database. As a backup, it can also be configured to generate fake UA strings with Faker. The middleware is extensible: you can add your own providers.
Please see CHANGELOG.
The simplest way is to install it via pip:
pip install scrapy-fake-useragent
Turn off the built-in UserAgentMiddleware and RetryMiddleware, and add RandomUserAgentMiddleware and RetryUserAgentMiddleware.
In Scrapy >=1.0:
DOWNLOADER_MIDDLEWARES = {
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
'scrapy.downloadermiddlewares.retry.RetryMiddleware': None,
'scrapy_fake_useragent.middleware.RandomUserAgentMiddleware': 400,
'scrapy_fake_useragent.middleware.RetryUserAgentMiddleware': 401,
}
In Scrapy <1.0:
DOWNLOADER_MIDDLEWARES = {
'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': None,
'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': None,
'scrapy_fake_useragent.middleware.RandomUserAgentMiddleware': 400,
'scrapy_fake_useragent.middleware.RetryUserAgentMiddleware': 401,
}
Recommended setting (1.3.0+):
FAKEUSERAGENT_PROVIDERS = [
'scrapy_fake_useragent.providers.FakeUserAgentProvider', # this is the first provider we'll try
'scrapy_fake_useragent.providers.FakerProvider', # if FakeUserAgentProvider fails, we'll use faker to generate a user-agent string for us
'scrapy_fake_useragent.providers.FixedUserAgentProvider', # fall back to USER_AGENT value
]
USER_AGENT = '<your user agent string which you will fall back to if all other providers fail>'
The package comes with a thin abstraction layer of User-Agent providers, which, for backwards compatibility, defaults to:
FAKEUSERAGENT_PROVIDERS = [
'scrapy_fake_useragent.providers.FakeUserAgentProvider'
]
The package also has FakerProvider (powered by the Faker library) and FixedUserAgentProvider implemented and available for use if needed.
Each provider is enabled individually and used in the order it is defined. If a provider fails to execute (as can happen to fake-useragent because of its dependency on an online service), the next one is used.
Example of what the FAKEUSERAGENT_PROVIDERS setting may look like in your case:
FAKEUSERAGENT_PROVIDERS = [
'scrapy_fake_useragent.providers.FakeUserAgentProvider',
'scrapy_fake_useragent.providers.FakerProvider',
'scrapy_fake_useragent.providers.FixedUserAgentProvider',
'mypackage.providers.CustomProvider'
]
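If you add your own provider (like the mypackage.providers.CustomProvider entry above), it needs to follow the same interface as the built-in providers. A minimal sketch, assuming a provider receives the crawler settings in its constructor and exposes a get_random_ua() method, mirroring the built-in providers; the CUSTOM_USER_AGENTS setting name is hypothetical:

```python
# A minimal sketch of a custom User-Agent provider (a hypothetical
# mypackage/providers.py). The get_random_ua() interface is an assumption
# modelled on the package's built-in providers.
import random


class CustomProvider:
    def __init__(self, settings):
        # CUSTOM_USER_AGENTS is a hypothetical setting holding a UA pool;
        # fall back to a single hard-coded string if it is not set.
        self._user_agents = settings.getlist('CUSTOM_USER_AGENTS') or [
            'Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0',
        ]

    def get_random_ua(self):
        # Called to obtain the next User-Agent string.
        return random.choice(self._user_agents)
```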
Parameter: FAKE_USERAGENT_RANDOM_UA_TYPE, defaulting to random.
Other options, for example:
- firefox to mimic only Firefox browsers
- msie to mimic only Internet Explorer
- etc.
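For instance, to restrict fake-useragent to Firefox User-Agent strings, you could set in your settings.py:

```python
# Only mimic Firefox browsers when fake-useragent picks a UA.
FAKE_USERAGENT_RANDOM_UA_TYPE = 'firefox'
```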
You can also set the FAKEUSERAGENT_FALLBACK
option, which is a fake-useragent
specific fallback. For example:
FAKEUSERAGENT_FALLBACK = 'Mozilla/5.0 (Android; Mobile; rv:40.0)'
If the selected FAKE_USERAGENT_RANDOM_UA_TYPE fails to retrieve a UA, the middleware uses the value set in FAKEUSERAGENT_FALLBACK.
Parameter: FAKER_RANDOM_UA_TYPE, defaulting to user_agent, which selects a completely random User-Agent value.
Other options, for example:
- chrome
- firefox
- safari
- etc. (please refer to the Faker user_agent provider documentation for the available options)
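Similarly, to have FakerProvider generate only Chrome-style UA strings, a settings.py fragment could look like:

```python
# Use Faker's chrome generator instead of a fully random user_agent.
FAKER_RANDOM_UA_TYPE = 'chrome'
```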
It also comes with a fixed provider (which only provides one user agent), reusing Scrapy's default USER_AGENT setting value.
To use this middleware with random-proxy middlewares such as scrapy-proxies, you need to:
- set RANDOM_UA_PER_PROXY to True to allow switching the User-Agent per proxy
- set the priority of RandomUserAgentMiddleware to be greater than that of scrapy-proxies, so that the proxy is set before the User-Agent is handled
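Putting those two points together, a sketch of the relevant settings; the scrapy-proxies priority values are assumptions based on that project's own README, and the key point is only that RandomUserAgentMiddleware's number is higher, so the proxy is assigned before the UA:

```python
RANDOM_UA_PER_PROXY = True

DOWNLOADER_MIDDLEWARES = {
    # scrapy-proxies picks a proxy first (lower numbers run earlier)...
    'scrapy_proxies.RandomProxy': 100,
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,
    # ...then a random User-Agent is chosen for that proxy.
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': None,
    'scrapy_fake_useragent.middleware.RandomUserAgentMiddleware': 400,
    'scrapy_fake_useragent.middleware.RetryUserAgentMiddleware': 401,
}
```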
The package is under MIT license. Please see LICENSE.