Any assistance would be appreciated. Since Scrapy is built on Twisted, it has an archaic way to deploy and run spiders. For starters, the Twisted reactor can only be started and stopped once and cannot be restarted, which makes it quite a challenge to integrate with modern workflows. For instance, I'm trying to wrap each spider in a Prefect task and flow so that I can track the progress of each spider and also run them all at once. Componentizing them matters for code quality, so I can't just queue them all up and run the reactor at the end as shown in the docs.
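For reference, this is roughly the docs-style pattern I'm trying to avoid, where every spider is queued on one CrawlerRunner and the reactor is started once at the end (sketched from memory, so details may be slightly off):

from twisted.internet import reactor
from scrapy.crawler import CrawlerRunner
from scrapy.utils.project import get_project_settings
# TheHinduSpider / BusinessTodaySpider are my project's spider classes

runner = CrawlerRunner(get_project_settings())
runner.crawl(TheHinduSpider)       # queue every spider up front
runner.crawl(BusinessTodaySpider)
d = runner.join()                  # fires when all queued crawls finish
d.addBoth(lambda _: reactor.stop())
reactor.run()                      # blocks; can only ever be started once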
In short: I want a function that runs a SINGLE spider from start to finish asynchronously and that can be run/re-run multiple times.
Ideally, I want to be able to define two separate components: a "run_spider" function that runs a single spider, is reusable, can be called multiple times, and handles the complete lifecycle of a spider; and a "run_spiders" function that runs multiple spiders asynchronously. Here is the structure I'm looking for:
@task
async def run_spider(spider):
    # await spider run
    print("spider done:", "run some tests on output here")

@flow
async def run_spiders_all():
    for spider in spiders:
        await run_spider(spider)
    # or better:
    tasks = [run_spider(s) for s in spiders]
    await asyncio.gather(*tasks)
This is what I've got so far; getting it working around the Twisted reactor has been a real nightmare 🙁
from prefect import task, flow
from twisted.internet import defer
from twisted.internet.task import react
from scrapy.crawler import CrawlerRunner
from scrapy.utils.project import get_project_settings

@task
def run_spider(spider_cls):
    @defer.inlineCallbacks
    def run(reactor):
        runner = CrawlerRunner(get_project_settings())
        yield runner.crawl(spider_cls)
        print("Spider output:", spider_cls.name)
    # react() starts a reactor, runs run(), and stops it when done;
    # a second call in the same process fails because the reactor
    # cannot be restarted
    react(run)

@flow
def run_spiders():
    run_spider(TheHinduSpider)
    run_spider(BusinessTodaySpider)
And no, I do not want to add them all at once and then start a single reactor. I know that can be done, as mentioned in the docs. The key to what I'm looking for is being able to run multiple spiders as async functions. I'm not even sure this is possible by design.
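The closest workaround I can think of is launching each crawl in its own subprocess, so every run gets a fresh reactor, but that feels like a hack and I lose easy in-process access to the crawl results for testing. A rough, untested sketch of the idea (the spider names below are just placeholders for the ones in my project):

import asyncio
from prefect import task, flow

@task
async def run_spider(spider_name: str) -> int:
    # shell out to "scrapy crawl <name>" so each crawl runs in its own
    # process and therefore gets its own fresh reactor
    proc = await asyncio.create_subprocess_exec("scrapy", "crawl", spider_name)
    returncode = await proc.wait()
    print("spider done:", spider_name, "exit code:", returncode)
    return returncode

@flow
async def run_spiders_all():
    names = ["thehindu", "businesstoday"]  # placeholder spider names
    await asyncio.gather(*(run_spider(n) for n in names))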