BeautifulSoup and lxml are libraries for parsing HTML and XML. Scrapy is an application framework for writing web spiders that crawl web sites and extract data from them.
Scrapy provides a built-in mechanism for extracting data (called selectors) but you can easily use BeautifulSoup (or lxml) instead, if you feel more comfortable working with them. After all, they’re just parsing libraries which can be imported and used from any Python code.
In other words, comparing BeautifulSoup (or lxml) to Scrapy is like comparing jinja2 to Django.
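For example, here is a minimal sketch (not taken from the Scrapy distribution) of a spider that parses responses with BeautifulSoup instead of Scrapy selectors; the URL and the BeautifulSoup 3 import are assumptions:

from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3 (Python 2)
from scrapy.spider import BaseSpider

class SoupSpider(BaseSpider):
    name = 'soup_example'
    start_urls = ['http://www.example.com']

    def parse(self, response):
        # Hand the raw body to BeautifulSoup instead of using Scrapy selectors
        soup = BeautifulSoup(response.body)
        self.log('Page title: %s' % soup.title.string)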
Scrapy runs in Python 2.5, 2.6 and 2.7, but it’s recommended you use Python 2.6 or above, since the Python 2.5 standard library has a few bugs in its URL handling modules. Some of these Python 2.5 bugs affect not only Scrapy but also any user code, such as spiders. You can see a list of Python 2.5 bugs that affect Scrapy in the issue tracker.
No, and there are no plans to port Scrapy to Python 3.0 yet. At the moment, Scrapy works with Python 2.5, 2.6 and 2.7.
Probably, but we don’t like that word. We think Django is a great open source project and an example to follow, so we’ve used it as an inspiration for Scrapy.
We believe that, if something is already done well, there’s no need to reinvent it. This concept, besides being one of the foundations for open source and free software, not only applies to software but also to documentation, procedures, policies, etc. So, instead of going through each problem ourselves, we choose to copy ideas from those projects that have already solved them properly, and focus on the real problems we need to solve.
We’d be proud if Scrapy serves as an inspiration for other projects. Feel free to steal from us!
Yes. Support for HTTP proxies is provided (since Scrapy 0.8) through the HTTP Proxy downloader middleware. See HttpProxyMiddleware.
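As a minimal sketch, you can route a single request through a proxy by setting the proxy key in the request meta, which HttpProxyMiddleware picks up as described in its documentation; the URLs below are assumptions:

from scrapy.http import Request

def parse(self, response):
    # Inside a spider callback: this request will go through the given proxy
    yield Request('http://www.example.com/page2',
                  meta={'proxy': 'http://someproxy.example.com:8080'})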
You need to install pywin32 because of this Twisted bug.
See Using FormRequest.from_response() to simulate a user login.
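A minimal sketch of such a login spider follows; the login URL, form field names and failure text are assumptions, not part of any real site:

from scrapy.spider import BaseSpider
from scrapy.http import FormRequest

class LoginSpider(BaseSpider):
    name = 'login_example'
    start_urls = ['http://www.example.com/users/login']

    def parse(self, response):
        # Pre-fill the login form found in the page and submit it
        return FormRequest.from_response(response,
            formdata={'username': 'john', 'password': 'secret'},
            callback=self.after_login)

    def after_login(self, response):
        # Check that the login succeeded before continuing
        if 'authentication failed' in response.body:
            self.log('Login failed')
            return
        # [ ... continue crawling as a logged-in user ... ]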
Yes, there’s a setting for that: SCHEDULER_ORDER.
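For example, in your project’s settings.py (the 'BFO'/'DFO' values below are assumptions; check the SCHEDULER_ORDER documentation for the values your Scrapy version accepts):

# Crawl in breadth-first order ('BFO') instead of depth-first order ('DFO')
SCHEDULER_ORDER = 'BFO'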
Also, Python has a built-in memory leak issue, which is described in Leaks without leaks.
See previous question.
Yes, see HttpAuthMiddleware.
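A minimal sketch of a spider using HTTP Basic auth through HttpAuthMiddleware; the credential values are assumptions, and the http_user/http_pass attribute names are the ones described in the HttpAuthMiddleware documentation:

from scrapy.contrib.spiders import CrawlSpider

class IntranetSpider(CrawlSpider):
    name = 'intranet'
    # HttpAuthMiddleware reads these attributes and adds the Authorization
    # header to every request made by this spider
    http_user = 'someuser'
    http_pass = 'somepass'
    # [ ... rest of the spider code ... ]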
Try changing the default Accept-Language request header by overriding the DEFAULT_REQUEST_HEADERS setting.
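For example, in your project’s settings.py (the language value is an assumption; use whichever locale the site expects):

DEFAULT_REQUEST_HEADERS = {
    'Accept-Language': 'en',
}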
Scrapy comes with a built-in, fully functional project to scrape the Google Directory. You can find it in the examples/googledir directory of the Scrapy distribution.
Also, there’s a site for sharing code snippets (spiders, middlewares, extensions) called Scrapy snippets.
Finally, you can find some example code for performing not-so-trivial tasks in the Scrapy Recipes wiki page.
Yes. You can use the runspider command. For example, if you have a spider written in a file named my_spider.py, you can run it with:
scrapy runspider my_spider.py
See runspider command for more info.
Those messages (logged with DEBUG level) don’t necessarily mean there is a problem, so you may not need to fix them.
Those messages are logged by the Offsite Spider Middleware, a spider middleware (enabled by default) whose purpose is to filter out requests to domains outside the ones covered by the spider.
For more info see: OffsiteMiddleware.
It’ll depend on how large your output is. See this warning in the JsonItemExporter documentation.
Some signals support returning deferreds from their handlers, others don’t. See the Built-in signals reference to know which ones.
999 is a custom response status code used by Yahoo sites to throttle requests. Try slowing down the crawling speed by using a download delay of 2 (or higher) in your spider:
from scrapy.contrib.spiders import CrawlSpider

class MySpider(CrawlSpider):
    name = 'myspider'
    DOWNLOAD_DELAY = 2
    # [ ... rest of the spider code ... ]
Or by setting a global download delay in your project with the DOWNLOAD_DELAY setting.
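For example, in your project’s settings.py:

# Wait 2 seconds between requests (applies to the whole project)
DOWNLOAD_DELAY = 2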
Yes, but you can also use the Scrapy shell, which allows you to quickly analyze (and even modify) the response being processed by your spider. This is often more useful than plain old pdb.set_trace().
For more info see Invoking the shell from spiders to inspect responses.
To dump into a JSON file:
scrapy crawl myspider --set FEED_URI=items.json --set FEED_FORMAT=json
To dump into a CSV file:
scrapy crawl myspider --set FEED_URI=items.csv --set FEED_FORMAT=csv
To dump into an XML file:
scrapy crawl myspider --set FEED_URI=items.xml --set FEED_FORMAT=xml
For more information see Feed exports.
The __VIEWSTATE parameter is used in sites built with ASP.NET/VB.NET. For more info on how it works see this page. Also, here’s an example spider which scrapes one of these sites.
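If you prefer not to copy that example, a minimal sketch of the usual approach is to let FormRequest.from_response() carry the hidden fields (including __VIEWSTATE) over from the page, so you only fill in the visible ones; the form field name and callback below are assumptions:

from scrapy.http import FormRequest

def parse(self, response):
    # from_response() pre-populates hidden inputs such as __VIEWSTATE
    # from the HTML form found in the response
    return FormRequest.from_response(response,
        formdata={'txtSearch': 'some query'},
        callback=self.parse_results)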
Parsing big feeds with XPath selectors can be problematic since they need to build the DOM of the entire feed in memory, and this can be quite slow and consume a lot of memory.
To avoid building the entire feed in memory at once, you can use the xmliter and csviter functions from the scrapy.utils.iterators module. In fact, this is what the feed spiders (see Spiders) use under the hood.
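A minimal sketch of using xmliter from a spider callback; the node name ('item') and the XPath are assumptions that depend on your feed:

from scrapy.utils.iterators import xmliter

def parse(self, response):
    # Each node is a selector scoped to a single <item> element, so the
    # whole feed is never built in memory at once
    for node in xmliter(response, 'item'):
        title = node.select('title/text()').extract()
        # [ ... build and yield your item here ... ]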