class Myspider1:
    # do something...

class Myspider2:
    # do something...

Above is the structure of my spider.py file. I am trying to run Myspider1 first, and then run Myspider2 multiple times depending on some conditions. How can I do this? Any tips?
configure_logging()
runner = CrawlerRunner()

def crawl():
    yield runner.crawl(Myspider1, arg.....)
    yield runner.crawl(Myspider2, arg.....)

crawl()
reactor.run()
I am trying to use this approach, but I don't know how to run it. Should I run something on the command line (what command?), or just run the Python file?

Thanks a lot!!!
Answer 0 (score: 2)

Run the Python file. For example:

test.py:
import scrapy
from twisted.internet import reactor, defer
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging


class MySpider1(scrapy.Spider):
    # Your first spider definition
    name = "dmoz1"  # spider names should be unique
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/"
    ]

    def parse(self, response):
        print("first spider")


class MySpider2(scrapy.Spider):
    # Your second spider definition
    name = "dmoz2"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        print("second spider")


configure_logging()
runner = CrawlerRunner()


@defer.inlineCallbacks
def crawl():
    yield runner.crawl(MySpider1)
    yield runner.crawl(MySpider2)
    reactor.stop()


crawl()
reactor.run()  # the script will block here until the last crawl call is finished
Now run: python test.py > output.txt

You can see from output.txt that your spiders ran sequentially.
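If, as the question asks, MySpider2 has to run several times depending on some condition, the same inlineCallbacks pattern can loop over the second crawl. A minimal sketch, assuming should_run_again() is a hypothetical condition check you supply yourself:

@defer.inlineCallbacks
def crawl():
    yield runner.crawl(MySpider1)
    # should_run_again() is a placeholder; replace it with your own condition
    while should_run_again():
        yield runner.crawl(MySpider2)
    reactor.stop()

crawl()
reactor.run()

Each yield waits for the previous crawl's Deferred to fire, so every run of MySpider2 starts only after the one before it has finished.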
Answer 1 (score: 0)

You need the Deferred object returned by process.crawl(), which lets you add a callback that fires once a crawl has finished.

Here is my code:
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings


def start_sequentially(process: CrawlerProcess, crawlers: list):
    print('start crawler {}'.format(crawlers[0].__name__))
    deferred = process.crawl(crawlers[0])
    if len(crawlers) > 1:
        # when this crawl finishes, start the next spider in the list
        deferred.addCallback(lambda _: start_sequentially(process, crawlers[1:]))


def main():
    crawlers = [Crawler1, Crawler2]  # your spider classes
    process = CrawlerProcess(settings=get_project_settings())
    start_sequentially(process, crawlers)
    process.start()


if __name__ == '__main__':
    main()
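Because each call to start_sequentially() attaches the next spider as a callback on the current crawl's Deferred, every crawler starts only after the previous one has finished, all within a single CrawlerProcess.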