I want to start a Scrapy crawler from a Python module. Essentially, I want to mimic:

$ scrapy crawl my_crawler -a some_arg=value -L DEBUG

I have everything in place, and I can quite happily run my project with the scrapy command above, but I'm writing integration tests and I want to, programmatically:

- launch a crawl using the settings from settings.py and the spider that has the my_crawler name attribute (I can easily instantiate this class from my test module);
- use all the pipelines and middleware specified in settings.py.

So, can anyone help me? I've seen various examples on the web, but they are either hacks for multiple spiders, or they work around Twisted's blocking nature, or they don't work with Scrapy 0.14 or above. I just need something really simple. :-)
Answer 0 (score: 7)
from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy import log, signals
from testspiders.spiders.followall import FollowAllSpider

spider = FollowAllSpider(domain='scrapinghub.com')
crawler = Crawler(Settings())
# stop the reactor once the spider finishes
crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start()
reactor.run()  # the script will block here until the spider_closed signal is sent
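The question also asks to mimic -a some_arg=value -L DEBUG. With the same Scrapy 0.x-era API as the answer above, -a arguments become constructor keyword arguments on the spider, and the log level can be passed to log.start(). A minimal sketch, assuming a hypothetical MyCrawlerSpider (import path invented here) whose name attribute is my_crawler and whose __init__ accepts some_arg:

from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy import log, signals
from scrapy.utils.project import get_project_settings
from myproject.spiders.my_crawler import MyCrawlerSpider  # hypothetical import path

spider = MyCrawlerSpider(some_arg='value')  # equivalent of -a some_arg=value
crawler = Crawler(get_project_settings())   # loads settings.py, so pipelines/middleware apply
crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start(loglevel=log.DEBUG)               # equivalent of -L DEBUG
reactor.run()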
Answer 1 (score: 3)
from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy import log, signals
from testspiders.spiders.followall import FollowAllSpider
from scrapy.utils.project import get_project_settings

settings = get_project_settings()  # uses settings.py, so pipelines and middleware apply
spider = FollowAllSpider()
crawler = Crawler(settings)
# stop the reactor once the spider finishes
crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start_from_settings(settings)
reactor.run()
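One caveat when wiring either answer into integration tests: Twisted's reactor can be started at most once per process and cannot be restarted, so a helper like the sketch below (run_spider is a name invented here, built from the same API as the answers above) supports only one crawl per test process; running each test in its own subprocess is a common workaround.

from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy import log, signals
from scrapy.utils.project import get_project_settings

def run_spider(spider):
    # Run a single crawl and block until the spider closes.
    settings = get_project_settings()
    crawler = Crawler(settings)
    crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
    crawler.configure()
    crawler.crawl(spider)
    crawler.start()
    log.start_from_settings(settings)
    reactor.run()  # blocks; the reactor cannot be run again afterwards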