Scrapy on a schedule

Date: 2017-05-28 15:14:04

Tags: python web-scraping scrapy twisted

Trying to get Scrapy to run on a schedule is driving me around the Twist(ed).

I thought the test code below would work, but I get a twisted.internet.error.ReactorNotRestartable error when the spider is triggered a second time:

from quotesbot.spiders.quotes import QuotesSpider
import schedule
import time
from scrapy.crawler import CrawlerProcess

def run_spider_script():
    process.crawl(QuotesSpider)
    process.start()


process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
})


schedule.every(5).seconds.do(run_spider_script)

while True:
    schedule.run_pending()
    time.sleep(1)

I'm guessing that as part of the CrawlerProcess the Twisted reactor is being asked to restart when that isn't possible, and the program crashes. Is there some way to control this?

Also, at this stage, if there is an alternative way to automate a Scrapy spider to run on a schedule, I'm all ears. I tried scrapy.cmdline.execute, but couldn't get that to loop either:

from quotesbot.spiders.quotes import QuotesSpider
from scrapy import cmdline
import schedule
import time
from scrapy.crawler import CrawlerProcess


def run_spider_cmd():
    print("Running spider")
    cmdline.execute("scrapy crawl quotes".split())


process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
})


schedule.every(5).seconds.do(run_spider_cmd)

while True:
    schedule.run_pending()
    time.sleep(1)

EDIT

Adding code that uses Twisted's task.LoopingCall() to run a test spider every few seconds. Am I going about this in completely the wrong way to schedule a spider that runs at the same time each day?

from twisted.internet import reactor
from twisted.internet import task
from scrapy.crawler import CrawlerRunner
import scrapy

class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):

        quotes = response.xpath('//div[@class="quote"]')

        for quote in quotes:

            author = quote.xpath('.//small[@class="author"]/text()').extract_first()
            text = quote.xpath('.//span[@class="text"]/text()').extract_first()

            print(author, text)


def run_crawl():

    runner = CrawlerRunner()
    runner.crawl(QuotesSpider)


l = task.LoopingCall(run_crawl)
l.start(3)

reactor.run()

2 Answers:

Answer 0 (score: 1)

The first thing worth noting is that there is usually only one Twisted reactor running, and it is not restartable (as you've discovered). The second is that blocking tasks/functions (e.g. time.sleep(n)) should be avoided and replaced with asynchronous alternatives (e.g. task.deferLater(reactor, n, ...)).

To use Scrapy effectively from a Twisted project, you need the scrapy.crawler.CrawlerRunner core API instead of scrapy.crawler.CrawlerProcess. The main difference between the two is that CrawlerProcess runs Twisted's reactor for you (which makes it hard to restart the reactor), whereas CrawlerRunner relies on the developer to start the reactor. Here is what your code could look like with CrawlerRunner:

from twisted.internet import reactor
from quotesbot.spiders.quotes import QuotesSpider
from scrapy.crawler import CrawlerRunner

def run_crawl():
    """
    Run a spider within Twisted. Once it completes,
    wait 5 seconds and run another spider.
    """
    runner = CrawlerRunner({
        'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
        })
    deferred = runner.crawl(QuotesSpider)
    # schedule the next run 5 seconds after this crawl finishes
    # (reactor.callLater or task.deferLater both work here)
    deferred.addCallback(lambda _: reactor.callLater(5, run_crawl))
    return deferred

run_crawl()
reactor.run()   # you have to run the reactor yourself
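To cover the "same time each day" part of the question, the same pattern works with a longer delay. A sketch, assuming you compute the delay yourself (the helper `seconds_until` below is hypothetical, not part of Scrapy or Twisted):

```python
from datetime import datetime, timedelta

def seconds_until(hour, minute, now=None):
    """Return seconds from `now` until the next occurrence of hour:minute."""
    now = now or datetime.now()
    target = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if target <= now:
        target += timedelta(days=1)  # time already passed today; use tomorrow
    return (target - now).total_seconds()

print(seconds_until(8, 0, datetime(2017, 5, 28, 7, 0)))  # → 3600.0
```

You would then schedule `reactor.callLater(seconds_until(8, 0), run_crawl)` and re-arm it at the end of each run, just as the 5-second example above does.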

Answer 1 (score: 0)

You can use apscheduler:

pip install apscheduler
# -*- coding: utf-8 -*-
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
from apscheduler.schedulers.twisted import TwistedScheduler

from Demo.spiders.baidu import YourSpider

process = CrawlerProcess(get_project_settings())
scheduler = TwistedScheduler()
scheduler.add_job(process.crawl, 'interval', args=[YourSpider], seconds=10)
scheduler.start()
process.start(False)