I'm learning NLP, and to that end I'm using Scrapy to scrape Amazon book reviews. I've extracted the fields I want and written them out in JSON file format. When I load this file as a df, each field is recorded as a single list rather than one row per entry. How can I split these lists so that each item gets its own row in the df, instead of all of a field's entries being bundled into one list?
Code:
import scrapy

class ReviewspiderSpider(scrapy.Spider):
    name = 'reviewspider'
    allowed_domains = ['amazon.co.uk']
    start_urls = ['https://www.amazon.com/Gone-Girl-Gillian-Flynn/product-reviews/0307588378/ref=cm_cr_othr_d_paging_btm_1?ie=UTF8&reviewerType=all_reviews&pageNumber=1']

    def parse(self, response):
        # .extract() already returns plain lists of strings
        users = response.xpath('//a[contains(@data-hook, "review-author")]/text()').extract()
        titles = response.xpath('//a[contains(@data-hook, "review-title")]/text()').extract()
        dates = response.xpath('//span[contains(@data-hook, "review-date")]/text()').extract()
        found_helpful = response.xpath('//span[contains(@data-hook, "helpful-vote-statement")]/text()').extract()
        rating = response.xpath('//i[contains(@data-hook, "review-star-rating")]/span[contains(@class, "a-icon-alt")]/text()').extract()
        content = response.xpath('//span[contains(@data-hook, "review-body")]/text()').extract()
        yield {
            'users': users,
            'titles': titles,
            'dates': dates,
            'found_helpful': found_helpful,
            'rating': rating,
            'content': content
        }
Desired output:
users = ['Lauren', 'James'...'John']
dates = ['on September 28, 2017', 'on December 26, 2017'...'on November 17, 2016']
rating = ['5.0 out of 5 stars', '2.0 out of 5 stars'...'5.0 out of 5 stars']
I know I should be editing the pipeline associated with the spider to achieve this, but my limited Python isn't enough to make sense of the Scrapy documentation. I've also tried the solutions from here and here, but I don't yet know enough to merge those answers into my own code. Any help would be much appreciated.
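(For reference: if all the lists come back the same length, the file can also be reshaped on the pandas side after the crawl. A minimal sketch, assuming the feed was exported to a file called reviews.json, which is a hypothetical name, and that Scrapy wrote it as a JSON array:

import json
import pandas as pd

# Scrapy's JSON export wraps the items in a list; the spider above
# yields a single item whose values are the parallel lists
with open('reviews.json') as f:
    record = json.load(f)[0]

# pandas turns a dict of equal-length lists into one row per element
df = pd.DataFrame(record)
print(df.head())

This only works if every list has the same length; see the caveat about missing fields in the answers below.)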
Answer 0 (score: 1)
After re-reading your question, I'm fairly sure this is what you want:
def parse(self, response):
    users = response.xpath('//a[contains(@data-hook, "review-author")]/text()').extract()
    titles = response.xpath('//a[contains(@data-hook, "review-title")]/text()').extract()
    dates = response.xpath('//span[contains(@data-hook, "review-date")]/text()').extract()
    found_helpful = response.xpath('//span[contains(@data-hook, "helpful-vote-statement")]/text()').extract()
    rating = response.xpath('//i[contains(@data-hook, "review-star-rating")]/span[contains(@class, "a-icon-alt")]/text()').extract()
    content = response.xpath('//span[contains(@data-hook, "review-body")]/text()').extract()
    # zip the parallel lists so each review is yielded as its own item (row)
    for user, title, date, helpful, stars, body in zip(users, titles, dates, found_helpful, rating, content):
        yield {
            'user': user,
            'title': title,
            'date': date,
            'found_helpful': helpful,
            'rating': stars,
            'content': body
        }
Or something along those lines. That's what I was trying to hint at in my first comment.
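One caveat: zip() stops at the shortest list, and found_helpful in particular will often be shorter than users because not every review has helpful votes, which silently drops or misaligns fields. A safer pattern is to iterate over one selector per review so the fields stay together; a sketch, where the div[@data-hook="review"] container is an assumption about Amazon's markup:

def parse(self, response):
    # one container node per review keeps its fields aligned even when a
    # review is missing one of them (e.g. no helpful votes yet)
    for review in response.xpath('//div[@data-hook="review"]'):
        yield {
            'user': review.xpath('.//a[contains(@data-hook, "review-author")]/text()').extract_first(),
            'title': review.xpath('.//a[contains(@data-hook, "review-title")]/text()').extract_first(),
            'date': review.xpath('.//span[contains(@data-hook, "review-date")]/text()').extract_first(),
            'found_helpful': review.xpath('.//span[contains(@data-hook, "helpful-vote-statement")]/text()').extract_first(),
            'rating': review.xpath('.//i[contains(@data-hook, "review-star-rating")]/span/text()').extract_first(),
            'content': review.xpath('.//span[contains(@data-hook, "review-body")]/text()').extract_first(),
        }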
Answer 1 (score: 0)
Edit: I was able to come up with a solution using the .css method rather than .xpath. The spider I used to scrape a list of shirts from a fashion retailer:
import scrapy
from ..items import ProductItem

class SportsdirectSpider(scrapy.Spider):
    name = 'sportsdirect'
    allowed_domains = ['www.sportsdirect.com']
    start_urls = ['https://www.sportsdirect.com/mens/mens-shirts']

    def parse(self, response):
        products = response.css('.s-productthumbbox')
        for p in products:
            brand = p.css('.productdescriptionbrand::text').extract_first()
            name = p.css('.productdescriptionname::text').extract_first()
            price = p.css('.curprice::text').extract_first()
            item = ProductItem()
            item['brand'] = brand
            item['name'] = name
            item['price'] = price
            yield item
The associated items.py script:
import scrapy

class ProductItem(scrapy.Item):
    # one Field per scraped attribute
    brand = scrapy.Field()
    name = scrapy.Field()
    price = scrapy.Field()
Creating the JSON lines file (from the Anaconda prompt):
cd simple_crawler
scrapy crawl sportsdirect --set FEED_URI=products.jl
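Scrapy's -o option should do the same thing in one flag:

scrapy crawl sportsdirect -o products.jl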
The code used to convert the resulting .jl file into a dataframe:
import json
import pandas as pd

# each line of the .jl (JSON lines) feed is one scraped item
with open('products.jl', 'r') as f:
    data = [json.loads(line) for line in f if line.strip()]
df2 = pd.DataFrame(data)
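Pandas can also read JSON lines natively, so the manual parsing collapses into one call (assuming one JSON object per line):

import pandas as pd

# lines=True treats each line of the file as a separate JSON record
df2 = pd.read_json('products.jl', lines=True)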
Final output:
           brand                     name  price
0  Pierre Cardin  Short Sleeve Shirt Mens  £6.50
1  Pierre Cardin  Short Sleeve Shirt Mens  £7.00
...