Scrapy as a Nameko DependencyProvider does not crawl any pages

Asked: 2017-09-03 13:08:57

Tags: python scrapy twisted nameko

I am building a sample web crawler with Scrapy as a Nameko dependency provider, but it does not crawl any pages. Here is the code:

import scrapy
from scrapy import crawler
from nameko import extensions
from twisted.internet import reactor


class TestSpider(scrapy.Spider):
    name = 'test_spider'
    result = None

    def parse(self, response):
        TestSpider.result = {
            'heading': response.css('h1::text').extract_first()
        }


class ScrapyDependency(extensions.DependencyProvider):

    def get_dependency(self, worker_ctx):
        return self

    def crawl(self, spider=None):
        spider = TestSpider()
        spider.name = 'test_spider'
        spider.start_urls = ['http://www.example.com']
        self.runner = crawler.CrawlerRunner()
        self.runner.crawl(spider)
        d = self.runner.join()
        d.addBoth(lambda _: reactor.stop())
        reactor.run()
        return spider.result

    def run(self):
        if not reactor.running:
            reactor.run()

Here is the log:

Enabled extensions:
['scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
Enabled item pipelines:
[]
Spider opened
Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
Closing spider (finished)
Dumping Scrapy stats:
{'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 9, 3, 12, 41, 40, 126088),
 'log_count/INFO': 7,
 'memusage/max': 59650048,
 'memusage/startup': 59650048,
 'start_time': datetime.datetime(2017, 9, 3, 12, 41, 40, 97747)}
Spider closed (finished)

The log shows that it did not crawl a single page, when it was expected to crawl one.

However, if I create a regular CrawlerRunner and crawl the same page, I get the expected result back: {'heading': 'Example Domain'}. Here is the code:

import scrapy
from scrapy import crawler
from twisted.internet import reactor


class TestSpider(scrapy.Spider):
    name = 'test_spider'
    start_urls = ['http://www.example.com']
    result = None

    def parse(self, response):
        TestSpider.result = {'heading': response.css('h1::text').extract_first()}

def crawl():
    runner = crawler.CrawlerRunner()
    runner.crawl(TestSpider)
    d = runner.join()
    d.addBoth(lambda _: reactor.stop())
    reactor.run()

if __name__ == '__main__':
    crawl()

I have spent days on this issue and cannot figure out why the Scrapy crawler fails to crawl pages when used as a Nameko dependency provider. Please point out where I am going wrong.

1 Answer:

Answer (score: 1):

Tarun's comment is correct. Nameko uses Eventlet for concurrency, while Scrapy uses Twisted. Both work in a similar way: there is a main thread (in Twisted, the reactor) that schedules all the other work, as an alternative to the normal Python thread scheduler. Unfortunately, the two systems do not interoperate.
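
Here is a rough sketch of why they clash, assuming Nameko's usual import-time monkey patching (the mechanism described in the comments is an interpretation, not something confirmed by the log):

import eventlet

# Importing nameko has essentially the same effect as this call: the
# stdlib's blocking primitives are swapped for Eventlet's cooperative
# versions before any of your code runs.
eventlet.monkey_patch()

import socket

# The stdlib socket module has been replaced, e.g. this now prints
# something like <class 'eventlet.greenio.base.GreenSocket'> instead
# of the plain socket class.
print(socket.socket)

# A Twisted reactor started after this point ends up doing its I/O on
# green sockets scheduled by Eventlet's hub rather than by Twisted
# itself, so a Scrapy crawl can "finish" without ever fetching a page,
# which is consistent with the log above.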

If you really want to integrate Nameko and Scrapy, your best bet is to run Scrapy in a separate process, as in the answers to these questions:
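
Along those lines, here is a minimal sketch of the separate-process approach. It assumes the spider lives in its own module (test_spider.py, a hypothetical name) and yields its items so they can be handed back through a Scrapy feed export; none of these names come from the original code:

import json
import subprocess
import tempfile

from nameko import extensions


class ScrapyDependency(extensions.DependencyProvider):

    def get_dependency(self, worker_ctx):
        return self

    def crawl(self):
        # "scrapy runspider" runs the spider in a fresh interpreter, so
        # Twisted's reactor never shares a process with Eventlet.
        with tempfile.NamedTemporaryFile(suffix='.json') as out:
            subprocess.check_call(
                ['scrapy', 'runspider', 'test_spider.py', '-o', out.name])
            # Read back the items Scrapy exported to the temp file.
            out.seek(0)
            data = out.read()
        return json.loads(data.decode('utf-8')) if data else None

Any other inter-process hand-off (a multiprocessing.Queue, a message broker) works just as well; the only requirement is that the Twisted reactor runs in an interpreter that Eventlet has not patched.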