CrawlerRunner won't crawl pages when using Crochet

Posted: 2019-01-28 19:32:29

Tags: python python-3.x scrapy aws-lambda

I'm trying to launch Scrapy from a script with CrawlerRunner() so that it can run inside AWS Lambda.

I've looked at the solutions on Stack Overflow that use the Crochet library, but they don't work for me.

Links: StackOverflow 1 StackOverflow 2

Here is the code:

import scrapy
from scrapy.crawler import CrawlerRunner
from scrapy.utils.project import get_project_settings
from scrapy.utils.log import configure_logging

# From response in Stackoverflow: https://stackoverflow.com/questions/41495052/scrapy-reactor-not-restartable
from crochet import setup
setup()

class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def start_requests(self):
        urls = [
            'http://quotes.toscrape.com/page/1/',
            'http://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]

        print ('Scrapped page n', page)


    def closed(self, reason):
        print ('Closed Spider: ', reason)


def run_spider():

    configure_logging({'LOG_FORMAT': '%(levelname)s: %(message)s'})

    crawler = CrawlerRunner(get_project_settings())
    crawler.crawl(QuotesSpider)        


run_spider()

When I execute the script, it returns the following log:

INFO: Overridden settings: {}
2019-01-28 16:49:52 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.logstats.LogStats']
2019-01-28 16:49:52 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-01-28 16:49:52 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-01-28 16:49:52 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2019-01-28 16:49:52 [scrapy.core.engine] INFO: Spider opened
2019-01-28 16:49:52 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-01-28 16:49:52 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023

Why doesn't the crawler run the spider? I'm running on a Mac with Python 3.7.1.

Any help?? Thank you very much for your support.

2 answers:

Answer 0 (score: 0)

I'm not sure whether you've already solved this, but here goes anyway.

If you don't use Crochet, you can write the scraper using Scrapy's CrawlerRunner directly with Twisted's reactor, as shown below.

import scrapy
from scrapy.crawler import CrawlerRunner
from twisted.internet import reactor

class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    def start_requests(self):
        urls = [
            'http://quotes.toscrape.com/page/1/',
            'http://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url)
    def parse(self, response):
        page = response.url.split('/')[-2]
        print ("Scrapped page n", page)
    def closed(self, reason):
        print ("Closed Spider: ", reason)

def run_spider():
    crawler = CrawlerRunner()
    d = crawler.crawl(QuotesSpider)          # returns a Deferred
    d.addCallback(lambda _: reactor.stop())  # stop the reactor once the crawl finishes
    reactor.run()                            # block until reactor.stop() is called

if __name__ == '__main__':
    run_spider()
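Here crawler.crawl() returns a Deferred that fires when the spider closes; the callback then stops the reactor so that reactor.run() returns and the script exits cleanly. Keep in mind that a Twisted reactor cannot be restarted once stopped, so this pattern breaks down in a warm AWS Lambda container that invokes the handler more than once, which is exactly what the Crochet variant below works around.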

If for whatever reason you want to use CrawlerRunner together with Crochet, then:

  1. wrap the run_spider function with Crochet's @wait_for decorator, and
  2. return the Deferred from crawler.crawl() out of the decorated run_spider function.

Try it out!

from crochet import setup, wait_for
import scrapy
from scrapy.crawler import CrawlerRunner

setup()

class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    def start_requests(self):
        urls = [
            'http://quotes.toscrape.com/page/1/',
            'http://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url)
    def parse(self, response):
        page = response.url.split('/')[-2]
        print ("Scrapped page n", page)
    def closed(self, reason):
        print ("Closed Spider: ", reason)

@wait_for(10)  # block the caller until the crawl's Deferred fires, or time out after 10 s
def run_spider():
    crawler = CrawlerRunner()
    d = crawler.crawl(QuotesSpider)
    return d

if __name__ == '__main__':
    run_spider()
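A note on the decorator: @wait_for(10) runs the body of run_spider on Crochet's reactor thread and blocks the calling thread until the returned Deferred fires. The argument is a timeout in seconds; if the crawl takes longer, Crochet cancels the Deferred and raises crochet.TimeoutError, so size the timeout to your slowest expected crawl.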

Answer 1 (score: 0)

I ran your code and I can see that the spider runs, but nothing from the parse function ever gets printed.

I added

# ==your code end===
import time
time.sleep(10)

at the end of the code, and after that I could see the parse function's output being printed.

So the cause is probably that the main process ends before parse gets a chance to run.
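
That matches what the question's script does: with Crochet set up, crawler.crawl() merely schedules the crawl on the background reactor and returns a Deferred, and nothing keeps the main thread alive. Instead of sleeping for a fixed time, here is a minimal sketch (my own variant, assuming recent Crochet and Scrapy; the 60-second timeout is an arbitrary choice) that blocks on the crawl via Crochet's run_in_reactor and EventualResult.wait():

import scrapy
from crochet import setup, run_in_reactor
from scrapy.crawler import CrawlerRunner

setup()  # start the Twisted reactor in a background thread

class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    start_urls = ['http://quotes.toscrape.com/page/1/']

    def parse(self, response):
        print('Scraped page', response.url.split('/')[-2])

@run_in_reactor
def crawl():
    # crawl() only schedules the work; it returns a Deferred immediately
    return CrawlerRunner().crawl(QuotesSpider)

if __name__ == '__main__':
    # wait() blocks the main thread until the Deferred fires (or the
    # timeout expires) -- this is the waiting the original script lacked
    crawl().wait(timeout=60)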