Scrapy: crawl a spider at scheduled times

Time: 2017-11-24 14:25:50

Tags: python scrapy web-crawler

I want to run a spider multiple times at scheduled times. After the first crawl finishes, the time of the next crawl is determined. Here is my code, but it blocks forever at the first process.start() line:

import collections
import datetime
import time

import lxml.html
import scrapy
from scrapy.crawler import CrawlerProcess

# Maps a scheduled run time to the spider class that should run at that time
spidersQ = collections.OrderedDict()

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    global spidersQ
    start_urls = [
        "https://www.amazon.com",
    ]

    def parse(self, response):
        root = lxml.html.fromstring(response.body)
        lxml_result = root.xpath("(//div[contains(@class,'a-section')]/div[contains(@class,'olpOffer')])[1]")

        price = lxml_result[0].text.strip()
        # Now schedule this spider to run again after 5 seconds
        spidersQ[datetime.datetime.now() + datetime.timedelta(seconds=5)] = QuotesSpider


def main():
    process = CrawlerProcess({
        'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
    })

    process.crawl(QuotesSpider)
    process.start(stop_after_crawl=False)  # the script will block here forever

    while True:
        # The first key in the OrderedDict is the earliest queued run time
        if spidersQ and datetime.datetime.now() > next(iter(spidersQ)):
            schedTime, spider = spidersQ.popitem(last=False)
            process.crawl(spider)
            process.start(stop_after_crawl=False)
        else:
            time.sleep(1)


if __name__ == '__main__':
    main()
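
(For context: process.start() starts Twisted's reactor, which only returns once the reactor is stopped, so the while loop below it is never reached, and a stopped reactor cannot be started again in the same process. The following is a minimal sketch, not taken from the post, of how the re-scheduling could instead be driven from inside the reactor with CrawlerRunner and reactor.callLater; the fixed 5-second delay is an illustrative assumption, and the sketch assumes the QuotesSpider class above is defined in the same module.)

# Sketch only: re-schedule follow-up crawls from inside the Twisted reactor
# instead of calling process.start() a second time.
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging
from twisted.internet import reactor

runner = CrawlerRunner({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
})


def crawl(delay_seconds=5):
    deferred = runner.crawl(QuotesSpider)
    # When this crawl finishes, queue the next one; the 5-second delay is
    # illustrative and could instead be computed from what parse() collected.
    deferred.addCallback(lambda _: reactor.callLater(delay_seconds, crawl))


configure_logging()
crawl()
reactor.run()  # blocks, while crawls keep re-scheduling themselves inside it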

1 Answer:

Answer 0 (score: 0)

You can try using the external schedule module:

Python job scheduling for humans
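
The schedule library's API boils down to registering jobs with schedule.every(...).do(job) and calling schedule.run_pending() in a loop. Since a Scrapy/Twisted reactor cannot be restarted once it has stopped, one common way to combine the two is to launch each crawl in a fresh child process. A rough sketch, assuming the QuotesSpider from the question is importable in the same module and using an illustrative 5-second interval:

# Rough sketch: periodic crawls driven by the external `schedule` module.
# Each crawl runs in its own child process so a fresh reactor is used
# every time (a stopped reactor cannot be restarted in-process).
import time
from multiprocessing import Process

import schedule
from scrapy.crawler import CrawlerProcess


def run_spider():
    process = CrawlerProcess({
        'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
    })
    process.crawl(QuotesSpider)  # the spider class from the question
    process.start()              # blocks only inside this child process


def job():
    p = Process(target=run_spider)
    p.start()
    p.join()


schedule.every(5).seconds.do(job)   # interval chosen purely for illustration

while True:
    schedule.run_pending()
    time.sleep(1)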