Scrapy: How to limit number of urls scraped in SitemapSpider
I'm working on a sitemap spider. The spider gets one sitemap URL and scrapes every URL in that sitemap. I want to limit the number of URLs to 100.
I can't use CLOSESPIDER_PAGECOUNT because I use an XML export pipeline: when Scrapy reaches the page count, it stops everything, including the XML generation, so the XML file is never closed and ends up invalid.
```python
from scrapy import Request
from scrapy.spiders import SitemapSpider


class MainSpider(SitemapSpider):
    name = 'main_spider'
    allowed_domains = ['doman.com']
    sitemap_urls = ['http://doman.com/sitemap.xml']

    def start_requests(self):
        for url in self.sitemap_urls:
            yield Request(url, self._parse_sitemap)

    def parse(self, response):
        print u'URL: {}'.format(response.url)
        if self._is_product(response):
            URL = response.url
            ITEM_ID = self._extract_code(response)
            ...
```
Do you know what to do?
Solution 1:[1]
If you are using SitemapSpider, you can use sitemap_filter, which is the proper way to filter sitemap entries.
```python
limit = 5  # limit to 5 entries only
count = 0  # entries counter

def sitemap_filter(self, entries):
    for entry in entries:
        if self.count >= self.limit:
            continue  # limit reached: skip the remaining entries
        self.count += 1
        yield entry
```
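For context, a minimal self-contained spider built around this idea might look like the sketch below. It assumes a Scrapy version that supports `SitemapSpider.sitemap_filter`; the spider name, domain, and the limit of 100 are placeholders taken from the question, not part of the original answer.

```python
from scrapy.spiders import SitemapSpider


class LimitedSitemapSpider(SitemapSpider):
    # Hypothetical names; adjust to your own spider/domain.
    name = 'limited_sitemap_spider'
    sitemap_urls = ['http://doman.com/sitemap.xml']

    limit = 100  # stop scheduling sitemap entries after this many
    count = 0

    def sitemap_filter(self, entries):
        # Called by SitemapSpider for each sitemap; only the entries
        # yielded here are turned into requests.
        for entry in entries:
            if self.count >= self.limit:
                break  # nothing more to yield once the limit is hit
            self.count += 1
            yield entry

    def parse(self, response):
        # Only the first `limit` URLs ever reach this callback, so the
        # XML export pipeline can close its feed normally at the end.
        yield {'url': response.url}
```

Because the filtering happens before requests are scheduled, the crawl ends on its own and the feed export shuts down through the normal close sequence.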
Solution 2:[2]
Returning early on its own was not enough for me, but you can combine it with the CloseSpider exception:
```python
# To import it:
from scrapy.exceptions import CloseSpider

# Later, to use it:
raise CloseSpider('message')
```
I posted the whole code combining both on Stack Overflow here.
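A rough sketch of how the early return and CloseSpider can be combined in a parse callback follows. The counter attribute, the limit of 100, and the class/spider names are assumptions for illustration, not the answerer's original code.

```python
from scrapy.exceptions import CloseSpider
from scrapy.spiders import SitemapSpider


class LimitedSpider(SitemapSpider):
    # Hypothetical spider; the name and sitemap URL are placeholders.
    name = 'limited_spider'
    sitemap_urls = ['http://doman.com/sitemap.xml']

    limit = 100
    count = 0

    def parse(self, response):
        if self.count >= self.limit:
            # Ask the engine to close the spider; pending requests are
            # discarded, but the spider shuts down through the regular
            # close sequence, so export pipelines can finish their file.
            raise CloseSpider('reached the URL limit')
        self.count += 1
        yield {'url': response.url}
```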
Solution 3:[3]
Why not have a count attribute on the spider, initialized to 0, and then in your parse method do:
```python
def parse(self, response):
    if self.count >= 100:
        return
    self.count += 1
    # do actual parsing here
```
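The counter itself has to live on the spider; a minimal sketch, with an illustrative class name and attribute, could look like this:

```python
from scrapy.spiders import SitemapSpider


class CountingSpider(SitemapSpider):
    name = 'counting_spider'  # placeholder name
    sitemap_urls = ['http://doman.com/sitemap.xml']

    count = 0  # incremented once per parsed response

    def parse(self, response):
        if self.count >= 100:
            return  # ignore everything past the first 100 responses
        self.count += 1
        # do actual parsing here
        yield {'url': response.url}
```

Note that this only skips parsing: the remaining sitemap URLs are still requested and downloaded, which is why the sitemap_filter of Solution 1 or the CloseSpider of Solution 2 is usually the better fit when you want to cut the crawl short.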
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Mirza Bilal |
| Solution 2 | Geoffroy de Viaris |
| Solution 3 | omu_negru |
