Scraping a single link with Scrapy

Posted: 2018-12-23 15:17:15

Tags: python web-scraping beautifulsoup scrapy

I am scraping products from dior.com. The head/script section gives me every field I need except the product description. To scrape the description, I need to follow a link (the url variable in the code below). The only way I know how to do that is with BeautifulSoup. Can I do the whole thing using just Scrapy? Thank you.

import re

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class DiorSpider(CrawlSpider):
    name = 'dior'
    allowed_domains = ['www.dior.com']
    start_urls = ['https://www.dior.com/en_us/']
    rules = (
        Rule(LinkExtractor(allow=(r'^https?://www.dior.com/en_us/men/clothing/new-arrivals.*',)),
             callback='parse_file'),  # trailing comma makes rules a one-element tuple
    )

    def parse_file(self, response):
        script_text = response.xpath("//script[contains(., 'window.initialState')]").extract_first()
        blocks = extract_blocks(script_text)
        for block in blocks:
            sku = re.compile(r'("sku":)"[a-zA-Z0-9_]*"').finditer(block)
            url = re.compile(r'("productLink":{"uri":)"[^"]*').finditer(block)
            for item in zip(sku, url):
                scraped_info = {
                    'sku': item[0].group(0).split(':')[1].replace('"', ''),
                    'url': 'https://www.dior.com' + item[1].group(0).split(':')[2].replace('"', '')
                }

                yield scraped_info
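
For illustration, the pairing of the two regexes can be exercised offline on a fabricated block. The JSON shape below is an assumption for the example, not Dior's actual markup; the capture groups replace the original's split/replace string surgery but produce the same kind of result:

```python
import re

# Fabricated sample resembling one block of window.initialState (assumed shape)
block = (
    '{"sku":"ABC123","productLink":{"uri":"/en_us/products/abc123"},'
    '"sku":"XYZ789","productLink":{"uri":"/en_us/products/xyz789"}}'
)

sku_re = re.compile(r'"sku":"([a-zA-Z0-9_]*)"')
url_re = re.compile(r'"productLink":{"uri":"([^"]*)"')

# zip pairs the n-th sku with the n-th product link, as in the spider
items = [
    {"sku": s.group(1), "url": "https://www.dior.com" + u.group(1)}
    for s, u in zip(sku_re.finditer(block), url_re.finditer(block))
]
print(items)
```

Note that zip silently drops unmatched trailing entries, so this pairing is only safe when every sku in a block has a corresponding productLink.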

1 Answer:

Answer (score: 0)

If you need additional information that lives on a second page, then instead of extracting everything in a single callback, you should issue a request for that URL and carry the already-extracted information along in the request's Request.meta attribute.

from scrapy import Request

# …

    def parse_file(self, response):
        # …
        for block in blocks:
            # …
            for item in zip(sku, url):
                # …
                yield Request(scraped_info['url'], callback=self.parse_additional_information,
                              meta={'scraped_info': scraped_info})

    def parse_additional_information(self, response):
        scraped_info = response.meta['scraped_info']
        # extract the additional information, add it to scraped_info
        yield scraped_info
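
What parse_additional_information extracts depends on the product page's markup. As a hedged sketch, assuming the description also lives in a window.initialState JSON blob (an assumption for illustration, not Dior's actual page structure), the merge step could look like:

```python
import json
import re


def add_description(scraped_info, script_text):
    """Merge a description parsed from a script blob into scraped_info.

    The 'window.initialState = {...}' shape is assumed for illustration.
    """
    match = re.search(r'window\.initialState\s*=\s*(\{.*\})', script_text)
    if match:
        state = json.loads(match.group(1))
        scraped_info["description"] = state.get("description", "")
    return scraped_info


# Usage with a fabricated script body
info = {"sku": "ABC123", "url": "https://www.dior.com/en_us/products/abc123"}
script = 'window.initialState = {"description": "A wool jacket."}'
info = add_description(info, script)
print(info)
```

In a real spider, script_text would come from the second response's xpath on the script tag, just as in parse_file.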