Getting twisted.defer.CancelledError when using Scrapy

Date: 2016-03-11 19:33:52

Tags: python scrapy twisted

Whenever I run the scrapy crawl command, I get the following error:

2016-03-12 00:16:56 [scrapy] ERROR: Error downloading <GET http://XXXXXXX/rnd/sites/default/files/Agreement%20of%20FFCCA(1).pdf>
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 577, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/downloader/handlers/http11.py", line 246, in _cb_bodyready
    raise defer.CancelledError()
CancelledError
2016-03-12 00:16:56 [scrapy] ERROR: Error downloading <GET http://XXXXXX/rnd/sites/default/files/S&P_Chemicals,etc.20150903.doc>
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 577, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/downloader/handlers/http11.py", line 246, in _cb_bodyready
    raise defer.CancelledError()
CancelledError

I tried searching the internet for this error, but to no avail.

My crawler code is as follows:

import os
import StringIO
import sys
import scrapy
from scrapy.conf import settings
from scrapy.selector import Selector
from scrapy.linkextractors import LinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule

class IntSpider(CrawlSpider):
    name = "intranetspidey"
    allowed_domains = ["*****"]
    start_urls = [
        "******"
    ]
    rules = (
        Rule(LinkExtractor(deny_extensions=["ppt","pptx"],deny=(r'.*\?.*') ),
             follow=True,
             callback='parse_webpage'),
    )


    def get_pdf_text(self, response):
        """ Peek inside PDF to check possible violations.
        @return: PDF content as searchable plain-text string
        """
        try:
            from pyPdf import PdfFileReader
        except ImportError:
            print "Needed: easy_install pyPdf"
            raise
        stream = StringIO.StringIO(response.body)
        reader = PdfFileReader(stream)
        text = u""

        if reader.getDocumentInfo().title:
            # Title is optional, may be None
            text += reader.getDocumentInfo().title

        for page in reader.pages:
            # XXX: Does handle unicode properly?
            text += page.extractText()

        return text 

    def parse_webpage(self, response):

        ct = response.headers.get("content-type", "").lower()
        if "pdf" in ct or ".pdf" in response.url:
            data = self.get_pdf_text(response)

        elif "html" in ct:
              do something

I have just started using Scrapy, and I would greatly appreciate an informed solution.

2 Answers:

Answer 0 (score: 0)

You are getting lines like this in your output/log:

Expected response size X larger than download max size Y.

It sounds like you are requesting responses larger than 1 GB. The error comes from the download handler, whose maximum response size defaults to one gig, but the limit can easily be overridden:
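A minimal sketch of such an override in the project's settings.py (the byte values below are illustrative assumptions, not figures from the original answer):

# settings.py -- raise the downloader's response-size limits
# (illustrative values; pick limits that fit your documents)
DOWNLOAD_MAXSIZE = 2147483648   # hard limit: responses above 2 GB are cancelled
DOWNLOAD_WARNSIZE = 536870912   # responses above 512 MB only log a warning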

Answer 1 (score: 0)

Ah, simple! :)

Just open the source code where the error is raised... it seems the page exceeds maxsize... which leads us here.

So the problem is that you are trying to fetch large documents. Increase the DOWNLOAD_MAXSIZE limit in your settings and you should be fine.
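If you only need the higher limit for this one spider, the same settings can also be set on the spider itself via its custom_settings attribute; a minimal sketch applied to the spider from the question (0 disables the size check entirely):

class IntSpider(CrawlSpider):
    name = "intranetspidey"
    # per-spider override of the global settings; 0 disables the size
    # check entirely, otherwise give a limit in bytes
    custom_settings = {
        "DOWNLOAD_MAXSIZE": 0,
        "DOWNLOAD_WARNSIZE": 0,
    }
    # ... rest of the spider unchanged ...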

Note: your performance will suffer, because the PDF decoding blocks the CPU and no further requests are issued while it runs; Scrapy's architecture is strictly single-threaded. Here are two (of many) solutions:

a) Use the file pipeline to download the files and batch-process them afterwards with another system (a configuration sketch follows after this list).

b) Use reactor.spawnProcess() and do the PDF decoding in a separate process (see here). This lets you use Python or any other command-line tool for the PDF decoding (a second sketch follows below).
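For option a), a minimal sketch of enabling Scrapy's built-in FilesPipeline; the storage path is an assumed placeholder, and the item is a plain dict using the pipeline's conventional file_urls field:

# settings.py -- enable the built-in files pipeline
ITEM_PIPELINES = {"scrapy.pipelines.files.FilesPipeline": 1}
FILES_STORE = "/path/to/downloaded/files"   # assumed storage directory

# in the spider: instead of parsing the PDF in-process,
# hand the URL to the pipeline and let another system process the file later
def parse_webpage(self, response):
    ct = response.headers.get("content-type", "").lower()
    if "pdf" in ct or ".pdf" in response.url:
        yield {"file_urls": [response.url]}

Note that the pipeline still downloads through Scrapy's downloader, so the DOWNLOAD_MAXSIZE limit discussed above still applies to those requests.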
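For option b), a rough sketch using Twisted's getProcessOutput(), a thin convenience wrapper around reactor.spawnProcess(); pdftotext and its install path are assumptions, and wiring the resulting Deferred back into your item flow is left out:

from twisted.internet import reactor
from twisted.internet.utils import getProcessOutput

def extract_pdf_text(pdf_path):
    # run the (assumed) pdftotext tool in a child process so the reactor
    # thread is not blocked; "-" makes it write the text to stdout
    d = getProcessOutput("/usr/bin/pdftotext", args=(pdf_path, "-"),
                         reactor=reactor)
    d.addErrback(lambda failure: u"")  # this sketch just swallows conversion errors
    return d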