scrapy: why does scrapy.Request call the parse() method by default?

Time: 2017-08-01 02:25:15

Tags: python scrapy

Why does scrapy.Request call the parse() method by default? I don't quite understand how this happens.

Part of the scrapy.Request source code:

class Request(object_ref):

    def __init__(self, url, callback=None, method='GET', headers=None, body=None,
                 cookies=None, meta=None, encoding='utf-8', priority=0,
                 dont_filter=False, errback=None, flags=None):

        self._encoding = encoding  # this one has to be set first
        self.method = str(method).upper()
        self._set_url(url)
        self._set_body(body)
        assert isinstance(priority, int), "Request priority not an integer: %r" % priority
        self.priority = priority

        assert callback or not errback, "Cannot use errback without a callback"
        self.callback = callback
        self.errback = errback

...

But the default callback here is None, so I'm confused by this:

    if "msg" in text_json and text_json["msg"] == "login":
        for url in self.start_urls:
            yield scrapy.Request(url, dont_filter=True, headers=self.headers)
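In other words, a request yielded without a callback ends up being handled by the spider's parse() method anyway. A minimal sketch of why, using made-up stand-in classes (FakeRequest and FakeSpider are hypothetical, not the real Scrapy classes): once the core applies its fallback, omitting callback is equivalent to passing callback=spider.parse.

```python
# Hypothetical stand-ins for illustration only, not the real scrapy classes.
class FakeRequest:
    def __init__(self, url, callback=None):
        self.url = url
        self.callback = callback  # stays None when omitted, just like scrapy.Request

class FakeSpider:
    def parse(self, response):
        return "parse() handled %s" % response

spider = FakeSpider()
implicit = FakeRequest("http://example.com")                       # callback omitted
explicit = FakeRequest("http://example.com", callback=spider.parse)

# The core's fallback expression gives both requests the same handler:
implicit_handler = implicit.callback or spider.parse
explicit_handler = explicit.callback or spider.parse
print(implicit_handler == explicit_handler)   # True: both are spider.parse
print(implicit_handler("a response"))
```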

1 answer:

Answer 0: (score: 0)

This is decided inside the Scrapy core; see the request.callback or spider.parse part here:

def call_spider(self, result, request, spider):
    result.request = request
    dfd = defer_result(result)
    dfd.addCallbacks(request.callback or spider.parse, request.errback)
    return dfd.addCallback(iterate_spider_output)
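The key is that a callback of None is falsy in Python, so the or expression falls through to spider.parse. A plain-Python sketch of that dispatch, with no Twisted or Scrapy involved (the names call_spider_sketch and custom are made up for illustration):

```python
def parse(response):
    return "default parse"

def custom(response):
    return "custom callback"

def call_spider_sketch(callback, response):
    # Mirrors dfd.addCallbacks(request.callback or spider.parse, ...):
    # when callback is None, `or` short-circuits to the spider's parse.
    handler = callback or parse
    return handler(response)

print(call_spider_sketch(None, "resp"))    # default parse
print(call_spider_sketch(custom, "resp"))  # custom callback
```

So yield scrapy.Request(url) and yield scrapy.Request(url, callback=self.parse) behave the same; passing any other callback overrides the default.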