Why doesn't my Scrapy spider go into the parse method?

Time: 2018-10-24 05:39:03

Tags: python scrapy

I'm trying to scrape a web page with the Python Scrapy library.

My code is as follows:

# Imports needed to run this snippet (not shown in the original post):
import scrapy
from scrapy import crawler
from twisted.internet import reactor
from multiprocessing import Process, Queue

# add_logs and capture_error are the asker's own helpers;
# add_logs writes a line of text to a database log table.

class AutoscoutDetailsSpider(scrapy.Spider):
    name = "vehicle details"
    reference_url = ''
    reference = ''

    def __init__(self, reference_url, reference, *args, **kwargs):
        super(AutoscoutDetailsSpider, self).__init__(*args, **kwargs)
        self.reference_url = reference_url
        self.reference = reference
        destination_url = "https://www.autoscout24.be/nl/aanbod/volkswagen-polo-1-2i-12v-base-birthday-climatronic-benzine-zilver-8913b173-cad5-ec63-e053-e250040a09a8"
        self.start_urls = [destination_url]
        add_logs(self.start_urls)

    def handle_error_response(self):
        add_logs("NOT EXISTS. REFERENCE {} AND REFERENCE URL {}.".format(self.reference, self.reference_url))

    def handle_gone_response(self):
        add_logs("SOLD or NOT AVAILABLE Reference {} and reference_url {} is sold or not available.".format(self.reference, self.reference_url))

    def parse(self, response):
        add_logs("THIS IS RESPONSE {}".format(response))

        if response.status == 404:
            self.handle_error_response()

        if response.status == 410:
            self.handle_gone_response()

        if response.status == 200:
            pass  # normal page: the actual scraping logic would go here

def start_get_vehicle_job(reference_url, reference):
    try:
        def f(q):
            try:
                # Run the crawl in a child process: a Twisted reactor
                # cannot be restarted once stopped, so each job gets its own.
                runner = crawler.CrawlerRunner()
                deferred = runner.crawl(AutoscoutDetailsSpider, reference_url, reference)
                deferred.addBoth(lambda _: reactor.stop())
                reactor.run()
                q.put(None)
            except Exception as e:
                capture_error(str(e))
                q.put(e)

        q = Queue()
        p = Process(target=f, args=(q,))
        p.start()
        result = q.get()
        p.join()

        if result is not None:
            raise result

        return {"Success.": "The crawler ({0}) is successfully executed.".format(reference_url)}
    except Exception as e:
        capture_error(str(e))
        return {"Failure": "The crawler ({0}) is NOT successfully executed.".format(reference_url)}


def main(params):
    start_get_vehicle_job(params.get('reference_url', None), params.get('reference', None))

So main is executed first; from main I call start_get_vehicle_job with reference_url and reference as arguments. start_get_vehicle_job then runs the Scrapy spider AutoscoutDetailsSpider.
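For context, a call into this entry point would look like the following; the parameter values here are made up purely for illustration:

main({
    "reference_url": "https://example.com/source-listing",  # hypothetical
    "reference": "some-reference-id",                       # hypothetical
})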

In __init__ I add the URL that needs to be scraped. The reference and reference_url arguments that arrive in __init__ are correct. The add_logs function just writes some text to the database, and in my case the add_logs call in __init__ logs the correct URL.
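add_logs itself is not shown in the question; a minimal sketch of such a helper, assuming a plain SQLite logs table (the file name, table name, and schema are all assumptions):

import sqlite3

def add_logs(message):
    # Append a text message to a hypothetical `logs` table.
    with sqlite3.connect("crawler.db") as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS logs (message TEXT)")
        conn.execute("INSERT INTO logs (message) VALUES (?)", (str(message),))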

After that it should go to the parse method, where I check the response status. I added add_logs("THIS IS RESPONSE {}".format(response)) at the top of the parse method, but that message never shows up in the logs table.

When I test this URL with scrapy shell, it works fine and I get response.status 404, which is correct.
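That check looks roughly like this in scrapy shell (output abbreviated; the URL is the destination_url hard-coded in __init__):

$ scrapy shell "https://www.autoscout24.be/nl/aanbod/volkswagen-polo-1-2i-12v-base-birthday-climatronic-benzine-zilver-8913b173-cad5-ec63-e053-e250040a09a8"
...
>>> response.status
404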


It's as if the Scrapy spider never enters the parse method at all.

Any ideas?

1 Answer:

Answer 0 (score: 0):

In case anyone else runs into the same problem: the solution is to add handle_httpstatus_list = [404] at the top of the spider.

class AutoscoutDetailsSpider(scrapy.Spider):
    handle_httpstatus_list = [404] ############# This line was the key
    name = "vehicle details"
    reference_url = ''
    reference = ''

By default, Scrapy only hands responses with 2xx status codes to the spider; anything else (such as a 404) is filtered out by the HttpError middleware before parse is ever called (docs). To handle other statuses, you need to add them to handle_httpstatus_list.
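Since the parse method in the question also checks for 410, that code needs to be whitelisted as well; a sketch of the adjusted spider head:

class AutoscoutDetailsSpider(scrapy.Spider):
    # Let 404 (not found) and 410 (gone) responses through to parse
    # instead of having HttpErrorMiddleware drop them.
    handle_httpstatus_list = [404, 410]
    name = "vehicle details"

Alternatively, the HTTPERROR_ALLOWED_CODES setting whitelists status codes for the whole project rather than a single spider.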