Scrapy: how to store a url_id along with the scraped data

Time: 2019-03-27 09:29:27

Tags: python python-3.x scrapy scrapy-pipeline

from scrapy import Spider, Request
from selenium import webdriver

class MySpider(Spider):
    name = "my_spider"

    def __init__(self):
        self.browser = webdriver.Chrome(executable_path='E:/chromedriver')
        self.browser.set_page_load_timeout(100)

    def closed(self, spider):
        print("spider closed")
        self.browser.close()

    def start_requests(self):
        start_urls = []
        # Each line of target_urls.txt holds "url_id\t\turl"
        with open("target_urls.txt", 'r', encoding='utf-8') as f:
            for line in f:
                url_id, url = line.split('\t\t')
                start_urls.append(url)

        for url in start_urls:
            yield Request(url=url, callback=self.parse)

    def parse(self, response):
        yield {
            'target_url': response.url,
            'comments': response.xpath('//div[@class="comments"]//em//text()').extract()
        }

Above is my spider code. I run the crawler with scrapy crawl my_spider -o comments.json.

As you may notice, each of my URLs has a unique url_id associated with it. How can I match each scraped result with its url_id? Ideally, I would like to store the url_id in the yielded output that goes into comments.json.
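For reference, each line of target_urls.txt pairs an id with a URL, separated by a double tab (the ids and URLs below are made-up placeholders):

001		http://example.com/page1
002		http://example.com/page2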

Thanks a lot!

2 Answers:

Answer 0 (score: 2):

Try passing the meta parameter, for example. I've updated your code a bit:

def start_requests(self):
    with open("target_urls.txt", 'r', encoding='utf-8') as f:
        for line in f:
            url_id, url = line.split('\t\t')
            # Attach the id and the original URL to the request via meta
            yield Request(url, self.parse, meta={'url_id': url_id, 'original_url': url})

def parse(self, response):
    # Everything placed in meta is available as response.meta in the callback
    yield {
        'target_url': response.meta['original_url'],
        'url_id': response.meta['url_id'],
        'comments': response.xpath('//div[@class="comments"]//em//text()').extract()
    }
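As a side note, Scrapy 1.7+ also offers cb_kwargs, which is the recommended way to pass per-request values into a callback (meta still works). A minimal sketch of the same idea:

def start_requests(self):
    with open("target_urls.txt", 'r', encoding='utf-8') as f:
        for line in f:
            url_id, url = line.split('\t\t')
            # cb_kwargs entries arrive as keyword arguments of the callback
            yield Request(url, self.parse, cb_kwargs={'url_id': url_id})

def parse(self, response, url_id):
    yield {
        'target_url': response.url,
        'url_id': url_id,
        'comments': response.xpath('//div[@class="comments"]//em//text()').extract()
    }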

Answer 1 (score: 1):

Following up on the question and the comments, try something like the following:

from scrapy import Spider, Request
from selenium import webdriver

class MySpider(Spider):
    name = "my_spider"

    def __init__(self):
        self.browser = webdriver.Chrome(executable_path='E:/chromedriver')
        self.browser.set_page_load_timeout(100)

    def closed(self, spider):
        print("spider closed")
        self.browser.close()

    def start_requests(self):
        with open("target_urls.txt", 'r', encoding='utf-8') as f:
            for line in f:
                url_id, url = line.split('\t\t')
                yield Request(url=url, callback=self.parse,
                              meta={'url_id': url_id, 'url': url})

    def parse(self, response):
        yield {
            'target_url': response.meta['url'],
            'comments': response.xpath('//div[@class="comments"]//em//text()').extract(),
            'url_id': response.meta['url_id']
        }

As mentioned in the previous answer, you can use meta (http://scrapingauthority.com/scrapy-meta) to pass parameters between the different methods.
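Note that meta is not copied into new requests automatically; if the spider follows links from the parsed page, you have to forward it yourself. A hypothetical sketch (the details.html follow-up URL is made up for illustration):

def parse(self, response):
    # Forward the original meta so url_id survives the next hop
    detail_url = response.urljoin('details.html')
    yield Request(detail_url, callback=self.parse_detail, meta=response.meta)

def parse_detail(self, response):
    yield {
        'url_id': response.meta['url_id'],
        'detail': response.xpath('//p/text()').extract()
    }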