Wrote an errback for my Scrapy spider, but tracebacks still keep happening. Why?

Time: 2016-08-05 12:24:17

Tags: scrapy python-3.5

I am using Scrapy 1.1 and I call Scrapy from within a script. My spider-launching method looks like this:

def run_spider(self):
    runner = CrawlerProcess(get_project_settings())
    spider = SiteSpider()
    configure_logging()
    d = runner.crawl(spider, websites_file=self.raw_data_file)
    d.addBoth(lambda _: reactor.stop())
    reactor.run()
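
For reference, the imports this launch snippet relies on would be along these lines (a sketch; run_spider is assumed to be a method of a class that defines raw_data_file):

from twisted.internet import reactor
from scrapy.crawler import CrawlerProcess
from scrapy.utils.log import configure_logging
from scrapy.utils.project import get_project_settings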

Below is an excerpt from my spider, with an errback as described in the documentation; it should only print a message when it catches a failure:

import scrapy
from scrapy.spidermiddlewares.httperror import HttpError
from twisted.internet.error import DNSLookupError, TimeoutError, TCPTimedOutError


class SiteSpider(scrapy.Spider):

    name = 'SiteCrawler'

    custom_settings = {
        'FEED_FORMAT': 'json',
        'FEED_URI': 'result.json',
    }

    def __init__(self, websites_file=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.websites_file = websites_file
        print('***********')
        print(self.websites_file)

    def start_requests(self):
        .....
                if is_valid_url(website_url):
                    yield scrapy.Request(url=website_url, callback=self.parse,
                                         errback=self.handle_errors, meta={'url': account_id})

    def parse(self, response):
        .....
            yield item

    def handle_errors(self, failure):
        if failure.check(HttpError):
            # these exceptions come from the HttpError spider middleware;
            # you can get the non-200 response here
            response = failure.value.response
            print('HttpError on ' + response.url)

        elif failure.check(DNSLookupError):
            # this is the original request
            request = failure.request
            print('DNSLookupError on ' + request.url)

        elif failure.check(TimeoutError, TCPTimedOutError):
            request = failure.request
            print('TimeoutError on ' + request.url)

My problem is that I do get the expected exception-handling messages, for example:

TimeoutError on http://www.example.com

but I also get tracebacks for the same websites:

2016-08-05 13:40:55 [scrapy] ERROR: Error downloading <GET http://www.example.com/robots.txt>: TCP connection timed out: 60: Operation timed out.
Traceback (most recent call last):
  File ".../anaconda/lib/python3.5/site-packages/twisted/internet/defer.py", line 1126, in _inlineCallbacks
    result = result.throwExceptionIntoGenerator(g)
  File ".../anaconda/lib/python3.5/site-packages/twisted/python/failure.py", line 389, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File ".../anaconda/lib/python3.5/site-packages/scrapy/core/downloader/middleware.py", line 43, in process_request
    defer.returnValue((yield download_func(request=request,spider=spider)))
twisted.internet.error.TCPTimedOutError: TCP connection timed out: 60: Operation timed out.

The exception-handling messages I wrote and the tracebacks can usually be traced back to the same websites. I have searched a lot on Stack Overflow, in the documentation, and the like, and I still don't know why I am seeing the tracebacks. The same thing happens with DNSLookupErrors, for example. Sorry, my Scrapy knowledge is still limited. Is this normal behavior?

Also, I added the following to the settings.py of my crawler; the other entries there (for example ITEM_PIPELINES) definitely work:

LOG_LEVEL = 'WARNING'

But I still see debug messages, not just warnings and everything above that (configure_logging() is added to the spider launch). I am running it from the terminal on Mac OS X. I would be glad for any help with this.

1 Answer:

Answer 0 (score: -1)

Try this in your script:

from twisted.internet import reactor
from scrapy.crawler import CrawlerProcess
from scrapy.utils.log import configure_logging
from scrapy.utils.project import get_project_settings

if __name__ == '__main__':
    runner = CrawlerProcess(get_project_settings())
    spider = SiteSpider()  # the spider class from the question
    configure_logging()
    # self.raw_data_file does not exist at module level; use a plain path
    d = runner.crawl(spider, websites_file='websites.txt')  # placeholder path
    d.addBoth(lambda _: reactor.stop())
    reactor.run()
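
As a side note on the LOG_LEVEL part of the question: configure_logging() called with no arguments installs a root log handler using Scrapy's default settings, where LOG_LEVEL is DEBUG, so the value in settings.py can end up ignored when running from a script. A minimal sketch of a workaround, assuming the scrapy.utils.log API (the WARNING level here mirrors the question, not a verified fix):

from scrapy.utils.log import configure_logging

# Pass settings explicitly so the root handler honors the desired level;
# alternatively, pass install_root_handler=False and configure the
# stdlib logging module by hand.
configure_logging({'LOG_LEVEL': 'WARNING'})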