How to retry on an IndexError in Scrapy

Date: 2018-10-22 22:48:14

Tags: python python-2.7 web-scraping scrapy scrapy-middleware

Sometimes I get an IndexError because only half of the page was scraped successfully, so the parsing logic raises an IndexError. How can I retry the request when an IndexError occurs?

Ideally this would be a middleware, so that it could handle several spiders at once.

2 Answers:

Answer 0 (score: 0)

If you think the page needs to be reloaded when the error occurs, you can try something like this:

from scrapy import Request

max_retries = 5

def parse(self, response):
    # to avoid getting stuck in a loop, only retry x times
    retry_count = response.meta.get('retry_count', 0)

    item = {}
    try:
        # parsing that can raise IndexError on a half-loaded page
        item['foo'] = response.xpath()[123]
        ...
    except IndexError as e:
        if retry_count == max_retries:
            self.logger.error('max retries reached for %s: %s', response.url, e)
            return
        # re-request the same URL with an incremented retry counter
        yield Request(
            response.url,
            dont_filter=True,
            meta={'retry_count': retry_count + 1},
        )
        return
    yield item
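The retry counter travels with the request through response.meta, and dont_filter=True is needed because Scrapy's duplicate filter would otherwise drop a second request for an already-crawled URL.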

Answer 1 (score: 0)

In the end I used a decorator and called the _retry() function of RetryMiddleware inside the decorator function. It works well. It is not ideal, and it would be better to have a middleware handle this, but it is better than nothing.

import logging

from scrapy.downloadermiddlewares.retry import RetryMiddleware

def handle_exceptions(function):
    def parse_wrapper(spider, response):
        try:
            for result in function(spider, response):
                yield result
        except IndexError as e:
            # log the half-scraped page for debugging (unicode() is Python 2)
            logging.log(logging.ERROR, "Debug HTML parsing error: %s" % unicode(response.body, 'utf-8'))
            # reuse Scrapy's built-in retry logic to reschedule the request
            RM = RetryMiddleware(spider.settings)
            yield RM._retry(response.request, e, spider)
    return parse_wrapper
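Note that RetryMiddleware._retry() returns a copy of the request with its retry counter (the retry_times meta key) incremented, and returns None once RETRY_TIMES is exhausted; Scrapy ignores a None yielded from a callback, so the retry loop terminates cleanly.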

Then I apply the decorator like this:

@handle_exceptions
def parse(self, response):
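For completeness, the middleware the question asks for can be built on the process_spider_exception() hook of a spider middleware, which Scrapy calls when a spider callback raises an exception; in recent Scrapy versions it may return an iterable of requests or items instead of letting the exception propagate. The sketch below is a minimal, untested illustration: the class name, the index_error_retries meta key, and the max_retries limit are all made up here, and the middleware still has to be enabled in SPIDER_MIDDLEWARES.

# middlewares.py -- minimal sketch; names below are illustrative, not from the answers
class RetryOnIndexErrorMiddleware(object):
    max_retries = 5

    def process_spider_exception(self, response, exception, spider):
        if not isinstance(exception, IndexError):
            return None  # let other middlewares / default handling deal with it
        retry_count = response.meta.get('index_error_retries', 0)
        if retry_count >= self.max_retries:
            spider.logger.error('Gave up on %s after %d IndexError retries',
                                response.url, retry_count)
            return []  # swallow the exception and produce nothing
        # re-issue the failed request, bypassing the dupefilter
        request = response.request.replace(dont_filter=True)
        request.meta['index_error_retries'] = retry_count + 1
        return [request]

Enable it in settings.py (the module path here is hypothetical):

SPIDER_MIDDLEWARES = {
    'myproject.middlewares.RetryOnIndexErrorMiddleware': 540,
}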