Request errback only returns HttpError, but none of the other errors it should

Time: 2018-01-23 03:36:31

Tags: python scrapy web-crawler twisted

Hi everyone ~ I am learning to use the errback parameter of scrapy.Request(). I wrote my code following the official demo, but I only ever see HttpError:

F:\Python_Coding\Scrapy\error_handler>scrapy crawl error_handler0 --nolog
>>>>
<<<<
Got successful response from http://www.httpbin.org/
|-------------------|
<<<<
<twisted.python.failure.Failure scrapy.spidermiddlewares.httperror.HttpError: Ignoring non-200 response>
>>>
HttpError on http://www.httpbin.org/status/404
|-------------------|
<<<<
<twisted.python.failure.Failure scrapy.spidermiddlewares.httperror.HttpError: Ignoring non-200 response>
>>>
HttpError on http://www.httpbin.org/status/500
|-------------------|
<<<<
<twisted.python.failure.Failure scrapy.spidermiddlewares.httperror.HttpError: Ignoring non-200 response>
>>>
HttpError on http://www.httpbin.org:12345/
|-------------------|
<<<<
<twisted.python.failure.Failure scrapy.spidermiddlewares.httperror.HttpError: Ignoring non-200 response>
>>>
HttpError on http://www.httphttpbinbin.org/
|-------------------|

But there should also be a DNSLookupError and a TimeoutError. I would like to know how failure.check() works: why doesn't it detect the DNSLookupError and TimeoutError?
Here is my code:

# -*- coding: utf-8 -*-
import scrapy
from scrapy.spidermiddlewares.httperror import HttpError
from twisted.internet.error import DNSLookupError
from twisted.internet.error import TimeoutError, TCPTimedOutError

class Error_handler_spider(scrapy.Spider):
    name = 'error_handler0'

    start_urls = [
        "http://www.httpbin.org/",              # HTTP 200 expected
        "http://www.httpbin.org/status/404",    # Not found error
        "http://www.httpbin.org/status/500",    # server issue
        "http://www.httpbin.org:12345/",        # non-responding host, timeout expected
        "http://www.httphttpbinbin.org/",       # DNS error expected
    ]

    def start_requests(self):
        for u in self.start_urls:
            yield scrapy.Request(u, self.parse,
                                 errback=self.handle_error,
                                 dont_filter=True)

    def parse(self, response):
        print('>>>>')
        print('<<<<')
        print('Got successful response from {}'.format(response.url))
        print('|-------------------|')

    def handle_error(self, failure):
        print('<<<<')
        print(repr(failure))
        print('>>>')
        if failure.check(HttpError):
            response = failure.value.response
            print('HttpError on {}'.format(response.url))
            print('|-------------------|')
        elif failure.check(DNSLookupError):
            request = failure.request
            print('DNSLookupError on {}'.format(request.url))
            print('|-------------------|')
        elif failure.check(TimeoutError, TCPTimedOutError):
            request = failure.request
            print('TimeoutError on {}'.format(request.url))
            print('|-------------------|')

Thanks in advance to anyone who can offer a suggestion :) PS: my environment is:

Scrapy : 1.5.0
lxml : 4.1.1.0
libxml2 : 2.9.5
cssselect : 1.0.3
parsel : 1.3.1
w3lib : 1.18.0
Twisted : 17.9.0
Python : 3.6.3 (v3.6.3:2c5fed8, Oct 3 2017, 18:11:49) [MSC v.1900 64 bit (AMD64)]
pyOpenSSL : 17.5.0 (OpenSSL 1.1.0g 2 Nov 2017)
cryptography : 2.1.4
Platform : Windows-7-6.1.7601-SP1

Without --nolog:

F:\Python_Coding\Scrapy\error_handler>scrapy crawl error_handler0
2018-01-23 16:31:51 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: error_handler)
2018-01-23 16:31:51 [scrapy.utils.log] INFO: Versions: lxml 4.1.1.0, libxml2 2.9.5, cssselect 1.0.3, parsel 1.3.1, w3lib 1.18.0, Twisted 17.9.0, Python 3.6.3 (v3.6.3:2c5fed8, Oct  3 2017, 18:11:49) [MSC v.1900 64 bit (AMD64)], pyOpenSSL 17.5.0 (OpenSSL 1.1.0g  2 Nov 2017), cryptography 2.1.4, Platform Windows-7-6.1.7601-SP1
2018-01-23 16:31:51 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'error_handler', 'NEWSPIDER_MODULE': 'error_handler.spiders', 'SPIDER_MODULES': ['error_handler.spiders']}
2018-01-23 16:31:51 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2018-01-23 16:31:51 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-01-23 16:31:51 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-01-23 16:31:51 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-01-23 16:31:51 [scrapy.core.engine] INFO: Spider opened
2018-01-23 16:31:51 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-01-23 16:31:51 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-01-23 16:31:52 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.httpbin.org/> (referer: None)
>>>>
<<<<
Got successful response from http://www.httpbin.org/
|-------------------|
2018-01-23 16:31:52 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://www.httpbin.org/status/404> (referer: None)
<<<<
<twisted.python.failure.Failure scrapy.spidermiddlewares.httperror.HttpError: Ignoring non-200 response>
>>>
HttpError on http://www.httpbin.org/status/404
|-------------------|
2018-01-23 16:31:52 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://www.httphttpbinbin.org/> (failed 1 times): 502 Bad Gateway
2018-01-23 16:31:52 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://www.httpbin.org:12345/> (failed 1 times): 502 Bad Gateway
2018-01-23 16:31:53 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://www.httpbin.org:12345/> (failed 2 times): 502 Bad Gateway
2018-01-23 16:31:53 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://www.httpbin.org/status/500> (failed 1 times): 500 Internal Server Error
2018-01-23 16:31:54 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://www.httpbin.org/status/500> (failed 2 times): 500 Internal Server Error
2018-01-23 16:31:54 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET http://www.httpbin.org/status/500> (failed 3 times): 500 Internal Server Error
2018-01-23 16:31:54 [scrapy.core.engine] DEBUG: Crawled (500) <GET http://www.httpbin.org/status/500> (referer: None)
<<<<
<twisted.python.failure.Failure scrapy.spidermiddlewares.httperror.HttpError: Ignoring non-200 response>
>>>
HttpError on http://www.httpbin.org/status/500
|-------------------|
2018-01-23 16:31:54 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET http://www.httpbin.org:12345/> (failed 3 times): 502 Bad Gateway
2018-01-23 16:31:54 [scrapy.core.engine] DEBUG: Crawled (502) <GET http://www.httpbin.org:12345/> (referer: None)
<<<<
<twisted.python.failure.Failure scrapy.spidermiddlewares.httperror.HttpError: Ignoring non-200 response>
>>>
HttpError on http://www.httpbin.org:12345/
|-------------------|
2018-01-23 16:31:55 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://www.httphttpbinbin.org/> (failed 2 times): 502 Bad Gateway
2018-01-23 16:31:55 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET http://www.httphttpbinbin.org/> (failed 3 times): 502 Bad Gateway
2018-01-23 16:31:55 [scrapy.core.engine] DEBUG: Crawled (502) <GET http://www.httphttpbinbin.org/> (referer: None)
<<<<
<twisted.python.failure.Failure scrapy.spidermiddlewares.httperror.HttpError: Ignoring non-200 response>
>>>
HttpError on http://www.httphttpbinbin.org/
|-------------------|
2018-01-23 16:31:55 [scrapy.core.engine] INFO: Closing spider (finished)
2018-01-23 16:31:55 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 2415,
 'downloader/request_count': 11,
 'downloader/request_method_count/GET': 11,
 'downloader/response_bytes': 15718,
 'downloader/response_count': 11,
 'downloader/response_status_count/200': 1,
 'downloader/response_status_count/404': 1,
 'downloader/response_status_count/500': 3,
 'downloader/response_status_count/502': 6,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2018, 1, 23, 8, 31, 55, 871134),
 'log_count/DEBUG': 15,
 'log_count/INFO': 7,
 'response_received_count': 5,
 'retry/count': 6,
 'retry/max_reached': 3,
 'retry/reason_count/500 Internal Server Error': 2,
 'retry/reason_count/502 Bad Gateway': 4,
 'scheduler/dequeued': 11,
 'scheduler/dequeued/memory': 11,
 'scheduler/enqueued': 11,
 'scheduler/enqueued/memory': 11,
 'start_time': datetime.datetime(2018, 1, 23, 8, 31, 51, 509884)}
2018-01-23 16:31:55 [scrapy.core.engine] INFO: Spider closed (finished)

settings.py:

# -*- coding: utf-8 -*-

# Scrapy settings for error_handler project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'error_handler'

SPIDER_MODULES = ['error_handler.spiders']
NEWSPIDER_MODULE = 'error_handler.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'error_handler (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'error_handler.middlewares.ErrorHandlerSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'error_handler.middlewares.ErrorHandlerDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'error_handler.pipelines.ErrorHandlerPipeline': 300,
#}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

1 Answer:

Answer 0 (score: 0)

Even though I cannot give you a detailed explanation, I think this problem is limited to Windows. On my Linux machine (Ubuntu 14.04, Python 3.4.3 and Twisted 17.9.0) it works as shown in the example.
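As a side note, I do not think failure.check() is the problem: as far as I know it simply walks the exception classes you pass in and returns the first one the wrapped exception is an instance of, or None if none match. A minimal sketch with plain Twisted (no Scrapy involved) to illustrate:

from twisted.python.failure import Failure
from twisted.internet.error import DNSLookupError

# Wrap a DNS error the same way Twisted would when a lookup fails
f = Failure(DNSLookupError("no results for hostname lookup"))
print(f.check(DNSLookupError))        # <class 'twisted.internet.error.DNSLookupError'>
print(f.check(KeyError, ValueError))  # None, the failure is none of these

So your elif failure.check(DNSLookupError) branch would work fine; the real issue is that in your environment the errback never receives a DNSLookupError failure in the first place.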

Compare the errors in the logs. You get:

2018-01-23 16:31:52 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://www.httphttpbinbin.org/> (failed 1 times): 502 Bad Gateway

Whereas I get:

2018-01-23 09:47:09 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://www.httphttpbinbin.org/> (failed 1 times): DNS lookup failed: no results for hostname lookup: www.httphttpbinbin.org.

That is, where you receive a genuine HTTP error (in the form of Scrapy's HttpError exception), I fail earlier, while trying to resolve the hostname (in the form of a Twisted exception). So I think this is related to how Twisted works, in particular how it interoperates with the underlying system services. Judging from your log, something on your Windows machine or network (a system-wide or ISP proxy, for example) answers the unresolvable and unreachable requests with a 502 Bad Gateway page instead of letting the DNS lookup or connection fail, which is why your errback only ever sees HttpError.
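If you want to pin down what is producing those 502 responses, one option is to dump more detail in the HttpError branch of your existing handle_error. This is only a rough sketch reusing the names from your spider (not tested on Windows); headers such as Server or Via and the start of the body usually reveal whether a local or ISP proxy generated the error page:

    # inside Error_handler_spider, replacing the original handle_error
    def handle_error(self, failure):
        if failure.check(HttpError):
            response = failure.value.response
            print('HttpError {} on {}'.format(response.status, response.url))
            print(response.headers)     # Server/Via headers often name an intercepting proxy
            print(response.body[:300])  # first bytes of the error page
        elif failure.check(DNSLookupError):
            print('DNSLookupError on {}'.format(failure.request.url))
        elif failure.check(TimeoutError, TCPTimedOutError):
            print('TimeoutError on {}'.format(failure.request.url))

It may also be worth checking whether a proxy is configured in your environment variables or in the Windows Internet Options, because HttpProxyMiddleware (enabled in your log) picks up system proxy settings automatically.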