I am trying to scrape data from Wikipedia pages (the Year-End Hot 100 singles for a range of years) and save the output to a CSV. The years 1951-1959 are written fine, and then I get this error:
line 43, in &lt;module&gt;
    writer.writerow(songs)
  File "C:\Python36_64\lib\encodings\cp1252.py", line 19, in encode
    return codecs.charmap_encode(input, self.errors, encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\u0107' in position 29: character maps to &lt;undefined&gt;
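For context, here is a minimal script (written separately, not part of the scraper) that reproduces the same error on a Windows machine whose locale makes open() default to cp1252, as the traceback above shows:

import csv

# Minimal reproduction, assuming open() falls back to cp1252 when no encoding is given.
with open('test.csv', 'w') as f:
    writer = csv.writer(f, lineterminator='\n')
    writer.writerow(['\u0107'])   # 'ć' (U+0107) is not in cp1252 -> UnicodeEncodeError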
Code:
from bs4 import BeautifulSoup
import requests
import csv

data = []

def scrape_data(search_year):
    year_data = []
    url = f'https://en.wikipedia.org/wiki/Billboard_Year-End_Hot_100_singles_of_{str(search_year)}'
    # Get the source code from the url
    r = requests.get(url).text
    soup = BeautifulSoup(r, 'html.parser')
    # Isolate the table part of the source code
    table = soup.find('table', attrs={'class': 'wikitable'})
    # Extract every row of the table
    rows = table.find_all('tr')
    # Iterate through every row
    for row in rows[1:]:
        # Extract cols (with tags td and th)
        cols = row.find_all(['td', 'th'])
        # List comprehension (create a list of lists, list of rows, in which every row is a list of table text)
        year_data.append([col.text.replace('\n', '') for col in cols])
    # Add the year this data is from to the beginning of each row
    for n in year_data:
        n.insert(0, search_year)
    return year_data

for year in range(1951, 2019):
    try:
        data.append(scrape_data(year))
        print(f'Year {str(year)} scraped')
    except AttributeError as e:
        print(f'Year {str(year)} is not available')

writer = csv.writer(open('songs.csv', 'w'), delimiter=',', lineterminator='\n', quotechar='"')
for year_data in data:
    for songs in year_data:
        writer.writerow(songs)
        print(songs)
Answer 0 (score: 2)
I think you can fix this by using the correct Unicode encoding when writing the output:
writer = csv.writer(open('songs.csv', 'w', encoding='utf-8'),
                    delimiter=',', lineterminator='\n', quotechar='"')
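The reason is that without encoding= the file is opened with the locale default codec (cp1252 in the traceback), which cannot represent characters such as 'ć' (U+0107) that appear in some artist and song names; encoding='utf-8' can encode any of them. As an optional further tweak, not part of the original answer, a with block closes the file for you and the csv module documentation recommends opening the file with newline='' and letting csv handle line endings, a sketch:

import csv

# Same fix, but with the file closed automatically and newline handling left to csv.
with open('songs.csv', 'w', encoding='utf-8', newline='') as f:
    writer = csv.writer(f, delimiter=',', quotechar='"')
    for year_data in data:        # `data` as built by the scraping loop above
        for songs in year_data:
            writer.writerow(songs)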