Can anyone help? When I write the extracted reviews to hotelreview.csv they end up spread across 3 separate columns; please point out what I got wrong and how to fix it so they are written into a single column. How can I also add the header name "review" to that column, based on the code below? In addition, I would like to add the newly extracted data (the "review" column) to the existing csv 'hotel_FortWorth.csv'. At the moment I only write the extracted information to a new csv, and I don't know how to combine the 2 files, or whether there is a better way. The URLs can be repeated to match the reviews. Thanks!
The file 'hotel_FortWorth.csv' has 3 columns, for example:
Name link
1 Omni Fort Worth Hotel https://www.tripadvisor.com.au/Hotel_Review-g55857-d777199-Reviews-Omni_Fort_Worth_Hotel-Fort_Worth_Texas.html
2 Hilton Garden Hotel https://www.tripadvisor.com.au/Hotel_Review-g55857-d2533205-Reviews-Hilton_Garden_Inn_Fort_Worth_Medical_Center-Fort_Worth_Texas.html
3......
...
I extract the reviews using the URLs from the existing csv; my code looks like this:
import requests
from unidecode import unidecode
from bs4 import BeautifulSoup
import pandas as pd
file = []
data = pd.read_csv('hotel_FortWorth.csv', header = None)
df = data[2]
for url in df[1:]:
    print(url)
    thepage = requests.get(url).text
    soup = BeautifulSoup(thepage, "html.parser")
    resultsoup = soup.find_all("p", {"class": "partial_entry"})
    file.extend(resultsoup)

with open('hotelreview.csv', 'w', newline='') as fid:
    for review in file:
        review_list = review.get_text()
        fid.write(unidecode(review_list+'\n'))
Expected result:
name link review
1 ... ... ...
2
....
Answer 0 (score: 0)
You can use pandas to create the new CSV.
For example:
import requests
from unidecode import unidecode
from bs4 import BeautifulSoup
import pandas as pd
data = pd.read_csv('hotel_FortWorth.csv')
review = []
for url in data["link"]:
    print(url)
    thepage = requests.get(url).text
    soup = BeautifulSoup(thepage, "html.parser")
    resultsoup = soup.find_all("p", {"class": "partial_entry"})
    # resultsoup is a list of <p> tags; join their text so each URL yields one review string
    review.append(unidecode(' '.join(tag.get_text() for tag in resultsoup)))

data["review"] = review
data.to_csv('hotelreview.csv', index=False)  # index=False keeps the row index out of the file
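
If the reviews have already been saved to a separate file instead, the two csv files can also be combined afterwards by matching rows on the shared URLs. Below is a minimal sketch of that approach, assuming hotelreview.csv was saved with both a 'link' and a 'review' column (the output filename is only an example):

import pandas as pd

# assumed layout: hotel_FortWorth.csv has 'Name' and 'link'; hotelreview.csv has 'link' and 'review'
hotels = pd.read_csv('hotel_FortWorth.csv')
reviews = pd.read_csv('hotelreview.csv')

# match each review to its hotel through the shared 'link' column;
# a left join keeps every hotel row even if no review was scraped for it
combined = pd.merge(hotels, reviews, on='link', how='left')

# example output name; index=False leaves the row index out of the file
combined.to_csv('hotel_FortWorth_with_reviews.csv', index=False)

This only works if the same URL string appears in both files, so the 'link' values should be written out unchanged when the reviews are scraped.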