The official docs give a number of ways to run a Scrapy crawler from code:
import scrapy
from scrapy.crawler import CrawlerProcess

class MySpider(scrapy.Spider):
    # Your spider definition
    ...

process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})
process.crawl(MySpider)
process.start()  # the script will block here until the crawling is finished
But all of these block the script until the crawl finishes. What is the simplest way to run a crawler from Python in a non-blocking, asynchronous way?
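For example, the CrawlerRunner variant from the docs hands back a Deferred, but the reactor still has to run in the calling thread. A minimal sketch of that documented pattern (reusing the MySpider above):

from twisted.internet import reactor
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging

configure_logging()
runner = CrawlerRunner()
d = runner.crawl(MySpider)           # returns a twisted Deferred
d.addBoth(lambda _: reactor.stop())  # stop the reactor when the crawl ends
reactor.run()                        # still blocks here until the crawl finishes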
Answer 0 (score: 4)
I tried every solution I could find, and the only one that worked for me was this. But to get it to work with scrapy 1.1rc1, I had to tweak it slightly:
from scrapy.crawler import Crawler
from scrapy import signals
from scrapy.utils.project import get_project_settings
from twisted.internet import reactor
from billiard import Process

class CrawlerScript(Process):
    def __init__(self, spider):
        Process.__init__(self)
        settings = get_project_settings()
        self.crawler = Crawler(spider.__class__, settings)
        # stop the reactor once the spider is closed
        self.crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
        self.spider = spider

    def run(self):
        # run() executes in the forked billiard child process, so the
        # blocking reactor.run() does not tie up the parent process
        self.crawler.crawl(self.spider)
        reactor.run()

def crawl_async():
    spider = MySpider()
    crawler = CrawlerScript(spider)
    crawler.start()  # fork the child process and start crawling
    crawler.join()
So now when I call crawl_async, it starts crawling without blocking my current thread. I'm completely new to scrapy, so this may not be a very good solution, but it worked for me.
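A minimal usage sketch (assuming MySpider from the question is defined in the same module):

if __name__ == '__main__':
    crawl_async()  # kicks off the crawl in a billiard child process
    print('back in the calling script')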
I used these versions of the libraries:
cffi==1.5.0
Scrapy==1.1rc1
Twisted==15.5.0
billiard==3.3.0.22
Answer 1 (score: 0)
Netimen's answer is correct: process.start() calls reactor.run(), which blocks the thread. I just don't think it is necessary to subclass billiard.Process. Although poorly documented, billiard.Process does have an API for calling another function asynchronously without subclassing:
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
from billiard import Process

crawler = CrawlerProcess(get_project_settings())
# pass stop_after_crawl=False through to crawler.start() so the reactor
# keeps running after the first crawl finishes
process = Process(target=crawler.start, kwargs={'stop_after_crawl': False})

def crawl(*args, **kwargs):
    crawler.crawl(*args, **kwargs)
    process.start()
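A hedged usage sketch (MySpider from the question is an assumption here):

crawl(MySpider)  # schedules the spider, then starts the reactor in a billiard child process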
Note that without stop_after_crawl=False, you may run into a ReactorNotRestartable exception when you run the crawler more than once.
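For illustration, a sketch of the failure mode that option avoids (MySpider is again an assumption):

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

crawler = CrawlerProcess(get_project_settings())
crawler.crawl(MySpider)
crawler.start()  # default stop_after_crawl=True stops the reactor here

crawler.crawl(MySpider)
crawler.start()  # raises twisted.internet.error.ReactorNotRestartable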