Can't get an XPath selector for a list on an HTML page

Posted: 2019-04-29 15:48:54

Tags: python scrapy

I can't work out a proper XPath for the list of product features: it keeps returning an empty list, and I'm having trouble selecting the whole list.

Link:

https://globaldrive.ru/moskva/motory/2х-тактный-лодочный-мотор-hangkai-m3.5-hp/

Here is the HTML I want to parse:

<div id="content_features" class="ty-wysiwyg-content content-features">

            <div class="ty-product-feature">
        <span class="ty-product-feature__label">Бренды:</span>


        <div class="ty-product-feature__value">Hangkai</div>
        </div>
                <div class="ty-product-feature">
        <span class="ty-product-feature__label">Вес:</span>


        <div class="ty-product-feature__value">УТОЧНЯЙТЕ У МЕНЕДЖЕРА<span class="ty-product-feature__suffix">кг</span></div>
        </div>

            </div>

My code:

for prop in response.xpath('//div[@id="content_features"]'):
    item['properties'].append(
        {
        'name': prop.xpath('normalize-space(./*[@class="ty-product-feature__label"])').extract_first(),
        'value': prop.xpath('normalize-space(./*[@class="ty-product-feature__value"])').extract_first(),
        }
    )

    yield item

The complete spider:

import scrapy


class GlobaldriveruSpider(scrapy.Spider):
    name = 'globaldriveru'
    allowed_domains = ['globaldrive.ru']
    start_urls = ['https://globaldrive.ru/moskva/motory/?items_per_page=500']

    def parse(self, response):
        links = response.xpath('//div[@class="ty-grid-list__item-name"]/a/@href').extract()
        for link in links:
            yield scrapy.Request(response.urljoin(link), callback=self.parse_products, dont_filter=True)
            #yield scrapy.Request(link, callback=self.parse_products, dont_filter=True)

    def parse_products(self, response):
        for parse_products in response.xpath('//div[contains(@class, "container-fluid  products_block_page")]'):
            item = dict()
            item['title'] = response.xpath('//h1[@class="ty-product-block-title"]/text()').extract_first()
            item['price'] = response.xpath('//meta[@itemprop="price"]/@content').get()
            item['available'] = response.xpath('normalize-space(//span[@id="in_stock_info_5511"])').extract_first()
            item['image'] = response.xpath('//meta[@property="og:image"]/@content').get()
            item['brand'] = response.xpath('normalize-space(//div[contains(@class,"ty-features-list")])').get()
            item['department'] = response.xpath('normalize-space(//a[@class="ty-breadcrumbs__a"][2]/text())').extract()
            item['properties'] = list()
            for prop in response.xpath('//div[@id="content_features"]'):
                item['properties'].append(
                      {
                          'name': prop.xpath('normalize-space(./*[@class="ty-product-feature__label"])').extract_first(),
                          'value': prop.xpath('normalize-space(./*[@class="ty-product-feature__value"])').extract_first(),
                      }
                )

            yield item

1 Answer:

Answer 0 (score: 1)

Your code is almost correct; the properties XPath just needs a small fix. The label and value elements are not direct children of #content_features, so ./*[@class="..."] matches nothing at that level; loop over its div.ty-product-feature children instead and each label/value pair is picked up. Your outer "products" loop also doesn't seem to serve any purpose, so I removed it. Check this code:

def parse_products(self, response):
    item = dict()
    item['title'] = response.xpath('//h1[@class="ty-product-block-title"]/text()').get()
    item['price'] = response.xpath('//meta[@itemprop="price"]/@content').get()
    item['available'] = response.xpath('normalize-space(//span[@id="in_stock_info_5511"])').get()
    item['image'] = response.xpath('//meta[@property="og:image"]/@content').get()
    item['brand'] = response.xpath('normalize-space(//div[contains(@class,"ty-features-list")])').get()
    item['department'] = response.xpath('normalize-space(//a[@class="ty-breadcrumbs__a"][2]/text())').extract()
    item['properties'] = list()
    for prop in response.xpath('//div[@id="content_features"]/div[@class="ty-product-feature"]'):
        item['properties'].append(
              {
                  'name': prop.xpath('normalize-space(./*[@class="ty-product-feature__label"])').get(),
                  'value': prop.xpath('normalize-space(./*[@class="ty-product-feature__value"])').get(),
              }
        )
    yield item
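
As a quick sanity check, the corrected feature XPath can be tested against the HTML fragment from the question using parsel, the selector library Scrapy uses internally. This is a minimal sketch; the html string below is just the abridged fragment from the question:

from parsel import Selector

# HTML fragment from the question, abridged to a single feature row
html = '''
<div id="content_features" class="ty-wysiwyg-content content-features">
    <div class="ty-product-feature">
        <span class="ty-product-feature__label">Бренды:</span>
        <div class="ty-product-feature__value">Hangkai</div>
    </div>
</div>
'''

sel = Selector(text=html)

# Iterate over each feature row rather than the whole container,
# so every label/value pair is captured separately.
for prop in sel.xpath('//div[@id="content_features"]/div[@class="ty-product-feature"]'):
    name = prop.xpath('normalize-space(./*[@class="ty-product-feature__label"])').get()
    value = prop.xpath('normalize-space(./*[@class="ty-product-feature__value"])').get()
    print(name, value)  # prints: Бренды: Hangkai

The same selectors behave identically on the live page inside scrapy shell, since response.xpath uses the same selector engine.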