Scrapy, trying to scrape multiple pages

Time: 2019-12-05 22:34:12

Tags: python xpath web-scraping web-crawler

I'm a beginner. In my first project, I'm trying to crawl a website that has multiple pages. I can scrape data from the first page (index = 0), but I can't get data from the following pages:

https://www.leroymerlin.es/decoracion-navidena/arboles-navidad?sort=default&gt=4-col&offset=4&index=1

https://www.leroymerlin.es/decoracion-navidena/arboles-navidad?sort=default&gt=4-col&offset=4&index=2

https://www.leroymerlin.es/decoracion-navidena/arboles-navidad?sort=default&gt=4-col&offset=4&index=3

....
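The pages above differ only in the `index` query parameter, so the full list of URLs can be generated up front. A minimal sketch (the `last_index` cut-off is an assumption here; the real page count would have to come from the site):

```python
# Sketch: build the paginated category URLs directly, assuming the site
# keeps this query-string scheme (offset stays constant, index increments).
BASE = "https://www.leroymerlin.es/decoracion-navidena/arboles-navidad"

def page_urls(last_index):
    # index=0 is the first page shown by default
    return [
        f"{BASE}?sort=default&gt=4-col&offset=4&index={i}"
        for i in range(last_index + 1)
    ]

urls = page_urls(3)  # four URLs, index=0 through index=3
```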

I have tried different Rules, but none of them work for me.

Here is my code:

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

from ..items import myfirstItem


class myfirstSpider(CrawlSpider):
    name = 'myfirst'

    start_urls = ["https://www.leroymerlin.es/decoracion-navidena/arboles-navidad"]
    allowed_domains = ["leroymerlin.es"]

    rules = (
        # Follow the pagination links
        Rule(LinkExtractor(restrict_xpaths='//li[@class="next"]/a')),
        # Extract product links and parse each product page
        Rule(LinkExtractor(restrict_xpaths='//a[@class="boxCard"]'), callback='parse_item', follow=False),
    )

    def parse_item(self, response):
        items = myfirstItem()
        items['product_name'] = response.css('.titleTechniqueSheet::text').extract()
        yield items

Although I have read thousands of posts with the same problem, none of them worked for me. Can anyone help?

*Edit: following @Fura's suggestion, I found a solution that works better for me. It looks like this:

class myfirstSpider(CrawlSpider):
    name = 'myfirst'

    start_urls = ["https://www.leroymerlin.es/decoracion-navidena/arboles-navidad?index=%s" % page_number for page_number in range(1, 20)]
    allowed_domains = ["leroymerlin.es"]

    rules = (
        Rule(LinkExtractor(allow=r'/fp'), callback='parse_item'),
    )

    def parse_item(self, response):
        items = myfirstItem()
        items['product_name'] = response.css('.titleTechniqueSheet::text').extract()
        yield items
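For reference, `LinkExtractor`'s `allow` argument is a regular expression searched against each absolute URL, so `allow=r'/fp'` keeps only links whose URL contains `/fp` (the product-page path on this site). A quick standard-library check with hypothetical example URLs:

```python
import re

# The same pattern passed to LinkExtractor(allow=...); a link is kept
# when the regex search matches its absolute URL.
pattern = re.compile(r'/fp')

# Hypothetical URLs for illustration only
product = "https://www.leroymerlin.es/fp/12345/arbol-de-navidad"
category = "https://www.leroymerlin.es/decoracion-navidena/arboles-navidad"

matched = pattern.search(product) is not None    # followed and parsed
skipped = pattern.search(category) is None       # filtered out
```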

0 Answers