My spider is not collecting the links correctly; it keeps fetching partial links from the page. How can I make the parser work properly?
import scrapy


class GlobaldriveruSpider(scrapy.Spider):
    name = 'globaldriveru'
    allowed_domains = ['globaldrive.ru']
    start_urls = ['https://globaldrive.ru/moskva/motory/?items_per_page=500']

    def parse(self, response):
        links = response.xpath('//div[@class="ty-grid-list__item-name"]/a/@href').get()
        for link in links:
            yield scrapy.Request(response.urljoin(link), callback=self.parse_products, dont_filter=True)
            # yield scrapy.Request(link, callback=self.parse_products, dont_filter=True)

    def parse_products(self, response):
        # for parse_products in response.xpath('//div[contains(@class, "container-fluid products_block_page")]'):
        item = dict()
        item['title'] = response.xpath('//h1[@class="ty-product-block-title"]/text()').extract_first()
        yield item
Here is part of the output log:
[]
2019-04-29 16:21:12 [scrapy.core.engine] INFO: Spider opened
2019-04-29 16:21:12 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-04-29 16:21:12 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6024
2019-04-29 16:21:13 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://globaldrive.ru/robots.txt> (referer: None)
2019-04-29 16:21:17 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://globaldrive.ru/moskva/motory/?items_per_page=500> (referer: None)
2019-04-29 16:21:17 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET http://globaldrive.ru/h/> from <GET https://globaldrive.ru/h>
2019-04-29 16:21:17 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET http://globaldrive.ru/-/> from <GET https://globaldrive.ru/->
2019-04-29 16:21:18 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET http://globaldrive.ru/%d0%b9/> from <GET https://globaldrive.ru/%D0%B9>
2019-04-29 16:21:18 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET http://globaldrive.ru/%d1%80/> from <GET https://globaldrive.ru/%D1%80>
Answer 0 (score: 1)
Replace `.get()` with `.extract()` in the `parse` function. Right now you are iterating over a single link character by character, when what you need is to extract all of the links. (The single-character redirects in your log, such as `/h/`, `/-/`, and `/%D0%B9`, are exactly this symptom: each character of the one extracted URL is being requested as if it were a link.)
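To see why, note that iterating over a Python string yields its individual characters, so each character of the single href returned by `.get()` becomes its own request. A minimal illustration (the href value here is made up):

href = "/moskva/motory/"  # .get() returns just one string like this
for link in href:
    print(link)  # "/", "m", "o", "s", ... one character per iteration

The corrected parse method: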
def parse(self, response):
    links = response.xpath('//div[@class="ty-grid-list__item-name"]/a/@href').extract()  # <- here
    for link in links:
        yield scrapy.Request(response.urljoin(link), self.parse_products)
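As a side note, in recent Scrapy versions `.getall()` is the preferred alias for `.extract()` (just as `.get()` replaces `.extract_first()`), and `response.follow` resolves relative URLs itself, so `response.urljoin` is not needed. A minimal sketch of the same spider using those idioms (untested against the live site):

import scrapy


class GlobaldriveruSpider(scrapy.Spider):
    name = 'globaldriveru'
    allowed_domains = ['globaldrive.ru']
    start_urls = ['https://globaldrive.ru/moskva/motory/?items_per_page=500']

    def parse(self, response):
        # .getall() returns a list of all matching hrefs (alias of .extract())
        for link in response.xpath('//div[@class="ty-grid-list__item-name"]/a/@href').getall():
            # response.follow resolves the relative URL against the current page
            yield response.follow(link, callback=self.parse_products)

    def parse_products(self, response):
        yield {
            'title': response.xpath('//h1[@class="ty-product-block-title"]/text()').get(),
        }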