I need to scrape all the reviews for a certain product on Amazon:
I am using Scrapy to do this. However, the code below does not seem to scrape all the reviews, because they are split across several pages. A person would first click on "all reviews" and then click "next page". I would like to know how to do this in Python, with Scrapy or another tool. The product has 5,893 reviews and I cannot collect that information manually.
My current code is the following:
import scrapy
from scrapy.crawler import CrawlerProcess

class My_Spider(scrapy.Spider):
    name = 'spid'
    start_urls = ['https://www.amazon.com/Cascade-ActionPacs-Dishwasher-Detergent-Packaging/dp/B01NGTV4J5/ref=pd_rhf_cr_s_trq_bnd_0_6/130-6831149-4603948?_encoding=UTF8&pd_rd_i=B01NGTV4J5&pd_rd_r=b6f87690-19d7-4dba-85c0-b8f54076705a&pd_rd_w=AgonG&pd_rd_wg=GG9yY&pf_rd_p=4e0a494a-50c5-45f5-846a-abfb3d21ab34&pf_rd_r=QAD0984X543RFMNNPNF2&psc=1&refRID=QAD0984X543RFMNNPNF2']

    def parse(self, response):
        for row in response.css('div.review'):
            item = {}
            item['author'] = row.css('span.a-profile-name::text').extract_first()
            rating = row.css('i.review-rating > span::text').extract_first().strip().split(' ')[0]
            item['rating'] = int(float(rating.strip().replace(',', '.')))
            item['title'] = row.css('span.review-title > span::text').extract_first()
            yield item
And to run the crawler:
process = CrawlerProcess({
})
process.crawl(My_Spider)
process.start()
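Note that Amazon tends to reject requests sent with Scrapy's default user agent, and with 5,893 reviews you probably want the items written to a file. A minimal sketch of passing settings to CrawlerProcess follows; the user-agent string and the output file name reviews.jl are only examples, not anything from the original question:

process = CrawlerProcess({
    # A browser-like user agent; the exact string is only an example.
    'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
    # Export every yielded item as JSON lines (newer Scrapy versions prefer the FEEDS setting).
    'FEED_FORMAT': 'jsonlines',
    'FEED_URI': 'reviews.jl',
})
process.crawl(My_Spider)
process.start()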
Could you tell me whether it is possible to move on to the next pages and scrape all the reviews? This should be the page where the reviews are stored.
Answer 0 (score: 0)
Using the URL https://www.amazon.com/Cascade-ActionPacs-Dishwasher-Detergent-Packaging/product-reviews/B01NGTV4J5/ref=cm_cr_arp_d_paging_btm_next_2?ie=UTF8&reviewerType=all_reviews&pageNumber=<PUT PAGE NUMBER HERE>
you can do something like this:
import scrapy
from scrapy.crawler import CrawlerProcess

class My_Spider(scrapy.Spider):
    name = 'spid'
    start_urls = ['https://www.amazon.com/Cascade-ActionPacs-Dishwasher-Detergent-Packaging/product-reviews/B01NGTV4J5/ref=cm_cr_arp_d_paging_btm_next_2?ie=UTF8&reviewerType=all_reviews&pageNumber=1']

    def parse(self, response):
        for row in response.css('div.review'):
            item = {}
            item['author'] = row.css('span.a-profile-name::text').extract_first()
            rating = row.css('i.review-rating > span::text').extract_first().strip().split(' ')[0]
            item['rating'] = int(float(rating.strip().replace(',', '.')))
            item['title'] = row.css('span.review-title > span::text').extract_first()
            yield item

        # Follow the "next page" link until there is none left.
        next_page = response.css('ul.a-pagination > li.a-last > a::attr(href)').get()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)
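Since the URL template above exposes the pageNumber parameter, an alternative sketch is to generate all the page URLs up front instead of following the pagination link. This assumes the default layout of 10 reviews per page, so roughly 590 pages for 5,893 reviews; the subclass below is made up for illustration and simply reuses the parse() method of My_Spider defined above:

# Template taken from the URL above, with the page number left as a placeholder.
REVIEW_PAGE_URL = ('https://www.amazon.com/Cascade-ActionPacs-Dishwasher-Detergent-Packaging/'
                   'product-reviews/B01NGTV4J5/ref=cm_cr_arp_d_paging_btm_next_2'
                   '?ie=UTF8&reviewerType=all_reviews&pageNumber={}')

class My_Paged_Spider(My_Spider):
    # Inherits parse() from My_Spider; only the start URLs change.
    name = 'spid_paged'
    # ~5,893 reviews at an assumed 10 reviews per page -> about 590 pages.
    start_urls = [REVIEW_PAGE_URL.format(page) for page in range(1, 591)]

Running it is the same as before, just with process.crawl(My_Paged_Spider).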