Crawling multiple spiders with CrawlerProcess not working: ReactorAlreadyInstalledError
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

import spider1
import spider2

settings = get_project_settings()
process = CrawlerProcess(settings)
process.crawl(spider1)
process.crawl(spider2)
process.start()
I have read in the documentation and in answers here that this seems to be the way to crawl multiple spiders, but I can't get it working. After spider1 completes, when spider2 starts, an error is thrown and the whole process stops. The error has to do with the reactor already being installed:
twisted.internet.error.ReactorAlreadyInstalledError: reactor already installed
I also tried stopping and restarting the process between the two crawls:
process.crawl(spider1)
process.start()
process.stop()
process.crawl(spider2)
process.start()
This still gives me the same error. I'm using Python 3.7.7 and Scrapy 2.6.1.
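For what it's worth, the only workaround I've found so far is running each crawl in its own child process, so every spider gets a fresh Twisted reactor. Below is a minimal sketch of that pattern. The `run_spider` body is a placeholder standing in for the real `CrawlerProcess` call (my actual spiders aren't shown here), and the spider names are just dummies:

```python
import multiprocessing


def run_spider(spider_name, results):
    # Placeholder for the real crawl. In the actual version this would be
    # something like:
    #   process = CrawlerProcess(get_project_settings())
    #   process.crawl(spider_class)
    #   process.start()
    # Because this runs in a child process, it gets its own fresh reactor.
    results.put(f"{spider_name} finished")


def crawl_sequentially(spider_names):
    results = multiprocessing.Queue()
    finished = []
    for name in spider_names:
        # One child process per spider, joined before starting the next,
        # so the crawls run one after another.
        p = multiprocessing.Process(target=run_spider, args=(name, results))
        p.start()
        p.join()
        finished.append(results.get())
    return finished


if __name__ == "__main__":
    print(crawl_sequentially(["spider1", "spider2"]))
```

This does avoid the reactor error for me, but spawning a process per spider feels heavy-handed, so I'd still like to know why the documented single-process approach fails.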
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
