Scrapy: store broken external links and discard the rest

Date: 2015-11-09 14:51:34

Tags: python-2.7 scrapy scrapy-spider

I want Scrapy to store only broken external links (those whose response code is not 200, 301, or 302), but I'm stuck: the script keeps storing every external link in the output file. This is what I'm using:

@staticmethod
def remote_file_to_array(url):

    return filter(None, urllib2.urlopen(url).read().splitlines())

@staticmethod
def sitemap_to_array(url):
    results = []
    body = urllib2.urlopen(url).read()
    sitemap = Sitemap(body)
    for item in sitemap:
        results.append(item['loc'])
    return results


def start_requests(self):


    target_domain = self.arg_target_domain
    print 'Target domain: ', target_domain


    self.rules = (

        Rule(LinkExtractor(allow_domains=[target_domain], unique=True),
             follow=True),

        Rule(LinkExtractor(unique=True),
             callback='parse_item',
             process_links='clean_links',
             follow=False),
    )
    self._compile_rules()


    start_urls = []
    if self.arg_start_urls.endswith('.xml'):
        print 'Sitemap detected!'
        start_urls = self.sitemap_to_array(self.arg_start_urls)
    elif self.arg_start_urls.endswith('.txt'):
        print 'Remote url list detected!'
        start_urls = self.remote_file_to_array(self.arg_start_urls)
    else: 
        start_urls = [self.arg_start_urls]
    print 'Start url count: ', len(start_urls)
    first_url = start_urls[0]
    print 'First url: ', first_url


    for url in start_urls:


        yield scrapy.Request(url, dont_filter=True)


def clean_links(self, links):
    for link in links:

        link.fragment = ''
        link.url = link.url.split('#')[0]
        yield link


def parse_item(self, response):
    item = BrokenLinksItem()
    item['url'] = response.url
    item['status'] = response.status
    yield item

2 Answers:

Answer 0 (score: 0):

You need to pass an errback parameter on the Request object; it works like callback, but for responses with non-accepted status codes.

I'm not sure whether this can be achieved through callback alone; otherwise you will need to define your own behaviour.
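
As a rough sketch (my own illustration, not code from this answer), attaching an errback could look like the following; handle_error is a hypothetical helper name, and BrokenLinksItem is reused from the question:

def start_requests(self):
    for url in self.start_urls:
        # errback fires for connection failures and for non-2xx responses
        # that HttpErrorMiddleware filters out
        yield scrapy.Request(url,
                             callback=self.parse_item,
                             errback=self.handle_error,
                             dont_filter=True)

def handle_error(self, failure):
    # failure.request is always set; failure.value.response is only
    # available when the failure wraps an HTTP error response
    response = getattr(failure.value, 'response', None)
    item = BrokenLinksItem()
    item['url'] = failure.request.url
    item['status'] = response.status if response is not None else repr(failure.value)
    yield item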

Answer 1 (score: 0):

Your best bet is to use a downloader middleware to log the responses you care about.

from twisted.internet import defer
from twisted.internet.error import (ConnectError, ConnectionDone, ConnectionLost, ConnectionRefusedError,
                                    DNSLookupError, TCPTimedOutError, TimeoutError,)
from twisted.web.client import ResponseFailed

class BrokenLinkMiddleware(object):

    ignore_http_status_codes = [200, 301, 302]
    exceptions_to_log = (ConnectError, ConnectionDone, ConnectionLost, ConnectionRefusedError, DNSLookupError, IOError,
                         ResponseFailed, TCPTimedOutError, TimeoutError, defer.TimeoutError)

    def process_response(self, request, response, spider):
        if response.status not in self.ignore_http_status_codes:
            # Do your logging here; response.url has the url and
            # response.status has the status code.
            spider.logger.info('Broken link: %s (%d)' % (response.url, response.status))
        return response

    def process_exception(self, request, exception, spider):
        if isinstance(exception, self.exceptions_to_log):
            # Do your logging here; the request failed without a response.
            spider.logger.info('Failed request: %s (%r)' % (request.url, exception))

That handles a few exceptions (like ConnectError, TimeoutError, and TCPTimedOutError) that may not necessarily indicate a broken link, but you may still want to log them.
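
If you go this route, the middleware also has to be enabled in the project settings. A minimal example, assuming the class lives in a hypothetical myproject.middlewares module (the path and the priority value are placeholders, not part of the answer):

DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.BrokenLinkMiddleware': 543,
}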