Commit d8ecd28

Documentation improvements (scrapy#6429)
Parent: 558b1d1

2 files changed: +10 -15 lines
Diff for: docs/intro/install.rst (+4 -9)
@@ -37,7 +37,7 @@ Note that sometimes this may require solving compilation issues for some Scrapy
 dependencies depending on your operating system, so be sure to check the
 :ref:`intro-install-platform-notes`.

-For more detailed and platform specifics instructions, as well as
+For more detailed and platform-specific instructions, as well as
 troubleshooting information, read on.

@@ -101,7 +101,7 @@ Windows
 -------

 Though it's possible to install Scrapy on Windows using pip, we recommend you
-to install `Anaconda`_ or `Miniconda`_ and use the package from the
+install `Anaconda`_ or `Miniconda`_ and use the package from the
 `conda-forge`_ channel, which will avoid most installation issues.

 Once you've installed `Anaconda`_ or `Miniconda`_, install Scrapy with::
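
The literal block that follows this context line is not part of the hunk; for reference, the conda-forge install command is ``conda install -c conda-forge scrapy``.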
@@ -141,7 +141,7 @@ But it should support older versions of Ubuntu too, like Ubuntu 14.04,
 albeit with potential issues with TLS connections.

 **Don't** use the ``python-scrapy`` package provided by Ubuntu, they are
-typically too old and slow to catch up with latest Scrapy.
+typically too old and slow to catch up with the latest Scrapy release.


 To install Scrapy on Ubuntu (or Ubuntu-based) systems, you need to install
@@ -170,7 +170,7 @@ macOS

 Building Scrapy's dependencies requires the presence of a C compiler and
 development headers. On macOS this is typically provided by Apple’s Xcode
-development tools. To install the Xcode command line tools open a terminal
+development tools. To install the Xcode command-line tools, open a terminal
 window and run::

     xcode-select --install
@@ -200,11 +200,6 @@ solutions:

     brew install python

-* Latest versions of python have ``pip`` bundled with them so you won't need
-  to install it separately. If this is not the case, upgrade python::
-
-      brew update; brew upgrade python
-
 * *(Optional)* :ref:`Install Scrapy inside a Python virtual environment
   <intro-using-virtualenv>`.
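
The virtual-environment route referenced in the last bullet typically comes down to creating and activating an environment, then installing Scrapy into it, e.g. ``python3 -m venv venv``, ``source venv/bin/activate``, ``pip install scrapy``.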

Diff for: docs/intro/overview.rst (+6 -6)
@@ -65,7 +65,7 @@ When you ran the command ``scrapy runspider quotes_spider.py``, Scrapy looked fo
 Spider definition inside it and ran it through its crawler engine.

 The crawl started by making requests to the URLs defined in the ``start_urls``
-attribute (in this case, only the URL for quotes in *humor* category)
+attribute (in this case, only the URL for quotes in the *humor* category)
 and called the default callback method ``parse``, passing the response object as
 an argument. In the ``parse`` callback, we loop through the quote elements
 using a CSS Selector, yield a Python dict with the extracted quote text and author,
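
A minimal sketch of the kind of spider this passage describes, based on the quotes example from the Scrapy tutorial (the actual ``quotes_spider.py`` file is not shown in this diff, so treat the URL and selectors as assumptions)::

    import scrapy


    class QuotesSpider(scrapy.Spider):
        name = "quotes"
        # Assumed start URL: quotes in the *humor* category.
        start_urls = ["https://quotes.toscrape.com/tag/humor/"]

        def parse(self, response):
            # Loop through the quote elements with a CSS selector and
            # yield a dict with the extracted quote text and author.
            for quote in response.css("div.quote"):
                yield {
                    "author": quote.css("small.author::text").get(),
                    "text": quote.css("span.text::text").get(),
                }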
@@ -83,9 +83,9 @@ While this enables you to do very fast crawls (sending multiple concurrent
 requests at the same time, in a fault-tolerant way) Scrapy also gives you
 control over the politeness of the crawl through :ref:`a few settings
 <topics-settings-ref>`. You can do things like setting a download delay between
-each request, limiting amount of concurrent requests per domain or per IP, and
+each request, limiting the amount of concurrent requests per domain or per IP, and
 even :ref:`using an auto-throttling extension <topics-autothrottle>` that tries
-to figure out these automatically.
+to figure these settings out automatically.

 .. note::
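
As a rough illustration, such politeness settings might look like this in a project's ``settings.py`` (the setting names are real Scrapy settings; the values here are arbitrary)::

    DOWNLOAD_DELAY = 2                  # seconds to wait between requests
    CONCURRENT_REQUESTS_PER_DOMAIN = 4  # cap parallel requests per domain
    CONCURRENT_REQUESTS_PER_IP = 0      # 0 disables the per-IP cap
    AUTOTHROTTLE_ENABLED = True         # let AutoThrottle adjust delays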

@@ -106,10 +106,10 @@ scraping easy and efficient, such as:

 * Built-in support for :ref:`selecting and extracting <topics-selectors>` data
   from HTML/XML sources using extended CSS selectors and XPath expressions,
-  with helper methods to extract using regular expressions.
+  with helper methods for extraction using regular expressions.

 * An :ref:`interactive shell console <topics-shell>` (IPython aware) for trying
-  out the CSS and XPath expressions to scrape data, very useful when writing or
+  out the CSS and XPath expressions to scrape data, which is very useful when writing or
   debugging your spiders.

 * Built-in support for :ref:`generating feed exports <topics-feed-exports>` in
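
A quick sketch of the selector and regex helpers the first bullet above refers to, as they might be tried in the interactive shell (the target URL and page markup are assumptions)::

    $ scrapy shell "https://quotes.toscrape.com"
    >>> response.css("small.author::text").getall()            # CSS extraction
    >>> response.xpath("//span[@class='text']/text()").get()   # XPath equivalent
    >>> response.css("span.text::text").re(r"[A-Za-z']+")      # regex helper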
@@ -124,7 +124,7 @@ scraping easy and efficient, such as:
   well-defined API (middlewares, :ref:`extensions <topics-extensions>`, and
   :ref:`pipelines <topics-item-pipeline>`).

-* Wide range of built-in extensions and middlewares for handling:
+* A wide range of built-in extensions and middlewares for handling:

   - cookies and session handling
   - HTTP features like compression, authentication, caching
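
As one example of the well-defined plugin API mentioned above, a minimal item pipeline is just a class with a ``process_item`` method (the module and class names below are hypothetical)::

    # myproject/pipelines.py
    from scrapy.exceptions import DropItem


    class DropShortQuotesPipeline:
        """Drop scraped quotes whose text is suspiciously short."""

        def process_item(self, item, spider):
            if len(item.get("text", "")) < 10:
                raise DropItem("quote too short")
            return item  # hand the item to the next pipeline stage

It would be enabled with ``ITEM_PIPELINES = {"myproject.pipelines.DropShortQuotesPipeline": 300}`` in ``settings.py``, where the integer sets its run order relative to other pipelines.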
