Scrapy spider doesn't receive the spider_idle signal

Date: 2017-03-05 14:36:47

Tags: python web-scraping scrapy web-crawler scrapy-spider

I have a spider that uses meta to work through a chain of requests and produce an item containing data from several of them. The way I generate the requests is to fire all of them on the first call of the parse function, but if I have too many links, not all of the requests get scheduled and I don't end up getting everything I need.

To work around this, I'm trying to have the spider request 5 products at a time and request more whenever it goes idle (by connecting to the spider_idle signal in from_crawler). The problem is that with my code as it stands, spider_idle never runs the request function and the spider closes immediately. It's as if the spider never goes idle.

Here is some of the code:

import scrapy
from scrapy import signals
from scrapy.exceptions import DontCloseSpider
# Header and Product are item classes assumed to be defined elsewhere


class ProductSpider(scrapy.Spider):
    def __init__(self, *args, **kwargs):
        super(ProductSpider, self).__init__(*args, **kwargs)
        self.parsed_data = []
        self.header = {}
        f = open('file.csv', 'r')
        f_data = [[x.strip()] for x in f]
        count = 1
        first = 'smth'  # sentinel: truthy until the header row is consumed
        for product in f_data:
            if first != '':
                # first row: map each seller name to its column index
                header = product[0].split(';')
                for each in range(len(header[1:])):
                    self.header[header[each+1]] = each+1
                first = ''
            else:
                product = product[0].split(';')
                product.append(count)
                count += 1
                self.parsed_data.append(product)
        f.close()

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super(ProductSpider, cls).from_crawler(crawler, *args, **kwargs)
        # run self.request every time the spider goes idle
        crawler.signals.connect(spider.request, signal=signals.spider_idle)
        return spider

    name = 'products'
    allowed_domains = [domains]
    handle_httpstatus_list = [400, 404, 403, 503, 504]

    start_urls = [start]

    def next_link(self, response):
        product = response.meta['product']
        there_is_next = False
        for each in range(response.meta['each'] + 1, len(product) - 1):
            if product[each] != '':
                there_is_next = True
                yield scrapy.Request(
                    product[each],
                    callback=response.meta['func_dict'][each],
                    meta={'func_dict': response.meta['func_dict'],
                          'product': product,
                          'each': each,
                          'price_dict': response.meta['price_dict'],
                          'item': response.meta['item']},
                    dont_filter=True)
                break
        if not there_is_next:
            # no link left in the chain: the item is complete
            item = response.meta['item']
            item['prices'] = response.meta['price_dict']
            yield item

    #[...] chain parsing functions for each request

    def get_products(self):
        # pop up to 5 products from the parsed CSV data per batch
        products = []
        data = self.parsed_data

        for each in range(5):
            if data:
                products.append(data.pop())
        return products

    def request(self):
        item = Header()
        item['first'] = True
        item['sellers'] = self.header
        yield item

        func_dict = {parsing_functions_for_every_site}

        products = self.get_products()
        if not products:
            return

        for product in products:
            item = Product()

            price_dict = {1: product[1]}
            item['name'] = product[0]
            item['order'] = product[-1]

            for each in range(2, len(product) - 1):
                if product[each] != '':
                    yield scrapy.Request(
                        product[each],
                        callback=func_dict[each],
                        meta={'func_dict': func_dict,
                              'product': product,
                              'each': each,
                              'price_dict': price_dict,
                              'item': item})
                    break

        raise DontCloseSpider

    def parse(self, response=None):
        pass

1 Answer:

Answer 0 (score: 4)

I'll assume you've already verified that your request method is actually reached, and that the real problem is that this method can't yield requests (or even items).

This is a common pitfall when dealing with signals in Scrapy, because the connected methods can't yield items/requests. The way around it is to use:

For a request:

from scrapy import Request

request = Request('myurl', callback=self.method_to_parse)
# hand the request straight to the engine instead of yielding it
self.crawler.engine.crawl(
    request,
    spider
)
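
(engine.crawl hands the request directly to the engine for scheduling, which is why it works from a signal handler, where yielded values are simply ignored.)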
For an item:

from scrapy.http import Response

item = MyItem()
# _process_spidermw_output is a private API, but it feeds the item
# into the spider middleware/pipeline machinery by hand
self.crawler.engine.scraper._process_spidermw_output(
    item,
    None,
    Response(''),
    spider,
)

Also, a spider_idle signal handler needs to receive a spider argument, so in your case it should be:

def request(self, spider):
    ...

That should make it work, but I'd suggest using a better name for the method.
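
Putting the pieces together, the asker's request method could become an idle handler roughly like the sketch below. This is a minimal, untested outline: the name schedule_batch is made up, it assumes the callback mapping is stored as self.func_dict, and Product, get_products and the meta layout are taken from the question's code. The Header item would be pushed through scraper._process_spidermw_output in the same way as shown above.

def schedule_batch(self, spider):
    # spider_idle handler: schedule the next batch of up to 5 products
    products = self.get_products()
    if not products:
        return  # nothing left; let the spider close normally

    for product in products:
        item = Product()
        item['name'] = product[0]
        item['order'] = product[-1]
        price_dict = {1: product[1]}

        for each in range(2, len(product) - 1):
            if product[each] != '':
                request = scrapy.Request(
                    product[each],
                    callback=self.func_dict[each],
                    meta={'func_dict': self.func_dict,
                          'product': product,
                          'each': each,
                          'price_dict': price_dict,
                          'item': item})
                # yielding here would be ignored, so schedule directly
                self.crawler.engine.crawl(request, spider)
                break

    # keep the spider open while batches remain
    raise DontCloseSpider

It would be connected in from_crawler just like before:

crawler.signals.connect(spider.schedule_batch, signal=signals.spider_idle)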