Scrapy spider that reads from a WARC file

Date: 2014-11-27 16:00:45

Tags: scrapy web-crawler warc

I am looking for a Scrapy Spider that, instead of fetching URLs and crawling them, takes a WARC file (preferably from S3) as input and sends the content to the parse method.

I actually need to skip the whole download phase, meaning that from the start_requests method I would like to return a Response that then gets sent to the parse method.
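For the S3 part, pulling the gzipped WARC out of a bucket and walking its response records might look like the following sketch (boto3 usage is an assumption, and the bucket and key names are placeholders, not from the question):

import gzip
import io

import boto3  # assumption: boto3 is available
import warc

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-bucket", Key="crawl/file.warc.gz")  # placeholder names
# Buffer the object in memory so gzip gets a file-like object to read from.
data = io.BytesIO(obj["Body"].read())
f = warc.WARCFile(fileobj=gzip.GzipFile(fileobj=data))
for record in f:
    if record.type == "response":
        print(record["WARC-Target-URI"])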

This is what I have done so far:

import gzip

import warc
from scrapy import Spider
from scrapy.http import Response


class WarcSpider(Spider):

    name = "warc_spider"

    def start_requests(self):
        f = warc.WARCFile(fileobj=gzip.open("file.warc.gz"))
        for record in f:
            if record.type == "response":
                # Split the raw HTTP payload into the header block and the body.
                payload = record.payload.read()
                headers, body = payload.split('\r\n\r\n', 1)
                url = record['WARC-Target-URI']
                # Scrapy expects start_requests() to yield Request objects,
                # so yielding a Response here is the part that does not work.
                yield Response(url=url, status=200, body=body, headers=headers)

    def parse(self, response):
        # code that creates an item
        pass

Any ideas on the Scrapy way of doing that?

1 answer:

Answer 0 (score: 1)

What you want to do is something like this:

import gzip

import warc
from scrapy import Spider
from scrapy.http import Request, Response


class DummyMdw(object):

    def process_request(self, request, spider):
        # Returning a Response from process_request() makes Scrapy skip the
        # actual download and hand this Response straight to the callback.
        record = request.meta['record']
        payload = record.payload.read()
        headers, body = payload.split('\r\n\r\n', 1)
        url = record['WARC-Target-URI']
        # Note: headers is still the raw header block as a single string;
        # see the parsing sketch after this answer for turning it into a dict.
        return Response(url=url, status=200, body=body, headers=headers)


class WarcSpider(Spider):

    name = "warc_spider"

    custom_settings = {
        'DOWNLOADER_MIDDLEWARES': {'x.DummyMdw': 1},  # 'x' = module containing DummyMdw
    }

    def start_requests(self):
        f = warc.WARCFile(fileobj=gzip.open("file.warc.gz"))
        for record in f:
            if record.type == "response":
                url = record['WARC-Target-URI']
                yield Request(url, callback=self.parse, meta={'record': record})

    def parse(self, response):
        # code that creates an item
        pass
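One caveat with the code above: after the split, headers is the raw HTTP header block as a single string, whereas Scrapy's Response expects a mapping. A minimal sketch of converting it (the parse_raw_headers helper is illustrative, not part of the original answer):

def parse_raw_headers(raw):
    # Turn the raw HTTP header block into a dict, skipping the
    # status line (e.g. "HTTP/1.1 200 OK").
    headers = {}
    for line in raw.split('\r\n')[1:]:
        name, sep, value = line.partition(':')
        if sep:
            headers[name.strip()] = value.strip()
    return headers

With that, the middleware can return Response(url=url, status=200, body=body, headers=parse_raw_headers(headers)), and every request is answered from the WARC file without touching the network.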