This is my code. I am scraping data from the 99acres website and storing it in a CSV file, but when I do so it gives me the error "'charmap' codec can't encode character '\u20b9'". Please tell me how this can be fixed.
import io
import csv
import requests
from bs4 import BeautifulSoup

response = requests.get('https://www.99acres.com/search/property/buy/residential-all/hyderabad?search_type=QS&search_location=HP&lstAcn=HP_R&lstAcnId=0&src=CLUSTER&preference=S&selected_tab=1&city=269&res_com=R&property_type=R&isvoicesearch=N&keyword_suggest=hyderabad%3B&fullSelectedSuggestions=hyderabad&strEntityMap=W3sidHlwZSI6ImNpdHkifSx7IjEiOlsiaHlkZXJhYmFkIiwiQ0lUWV8yNjksIFBSRUZFUkVOQ0VfUywgUkVTQ09NX1IiXX1d&texttypedtillsuggestion=hyder&refine_results=Y&Refine_Localities=Refine%20Localities&action=%2Fdo%2Fquicksearch%2Fsearch&suggestion=CITY_269%2C%20PREFERENCE_S%2C%20RESCOM_R&searchform=1&price_min=null&price_max=null')
html = response.text
soup = BeautifulSoup(html, 'html.parser')

list = []
dealer = soup.findAll('div', {'class': 'srpWrap'})
for item in dealer:
    try:
        p = item.contents[1].find_all("div", {"class": "_srpttl srpttl fwn wdthFix480 lf"})[0].text
    except:
        p = ''
    try:
        d = item.contents[1].find_all("div", {"class": "lf f13 hm10 mb5"})[0].text
    except:
        d = ''
    li = [p, d]
    list.append(li)

with io.open('project.csv', 'w', encoding="utf-8") as file:
    writer = csv.writer(file)
    for row in list:
        writer.writerow(row)  # writerow, not writerows: each [p, d] pair is a single CSV row
    file.close()
Answer 0 (score: 0):
Use: .encode(encoding='UTF-8')
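Below is a minimal sketch (not from the answer itself) of two common ways to apply this, assuming the error is raised when scraped text containing the rupee sign '\u20b9' is written out with Windows' default 'charmap' codec; the price variable here is hypothetical:

# Hypothetical example: a piece of scraped text containing the rupee sign.
price = '\u20b9 45 Lac'

# Fails on Windows when the file is opened with the platform's default codec:
# with open('project.csv', 'w') as f:
#     f.write(price)  # UnicodeEncodeError: 'charmap' codec can't encode character '\u20b9'

# Option 1 (the answer's suggestion): encode the string to UTF-8 bytes yourself
# and write them to a file opened in binary mode.
with open('project.csv', 'wb') as f:
    f.write(price.encode(encoding='UTF-8'))

# Option 2: open the file in text mode with an explicit UTF-8 encoding,
# which is what io.open('project.csv', 'w', encoding="utf-8") in the question already does.
with open('project.csv', 'w', encoding='utf-8', newline='') as f:
    f.write(price)

Opening the output file with an explicit encoding="utf-8", as in option 2, avoids having to call .encode() on every individual string before writing it.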