I'm trying to do web scraping in Python with Beautiful Soup. As a newbie, I took the source code from [https://syntaxbytetutorials.com/beautifulsoup-4-python-web-scraping-to-csv-excel-file/] and started experimenting with it. Now I'm getting this error:

`module 'html5lib.treebuilders' has no attribute '_base'`

It would be very helpful if someone could explain the cause of the error and offer a solution :)
import urllib.request as urllib
import csv
from bs4 import BeautifulSoup

rank_page = 'https://socialblade.com/youtube/top/50/mostviewed'

# Send a browser-like User-Agent so the site does not reject the request
request = urllib.Request(rank_page, headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.125 Safari/537.36'})
page = urllib.urlopen(request)
soup = BeautifulSoup(page, 'html.parser')

# Skip the first four child divs (header rows); the rest are channel rows
channels = soup.find('div', attrs={'style': 'float: right; width: 900px;'}).find_all('div', recursive=False)[4:]

# In Python 3, csv expects a text-mode file opened with newline=''
file = open('topyoutubers.csv', 'w', newline='', encoding='utf-8')
writer = csv.writer(file)

# write title row
writer.writerow(['Username', 'Uploads', 'Views'])

for channel in channels:
    username = channel.find('div', attrs={'style': 'float: left; width: 350px; line-height: 25px;'}).a.text.strip()
    uploads = channel.find('div', attrs={'style': 'float: left; width: 80px;'}).span.text.strip()
    views = channel.find_all('div', attrs={'style': 'float: left; width: 150px;'})[1].span.text.strip()
    print(username + ' ' + uploads + ' ' + views)
    # The file is already UTF-8 text mode, so write the strings directly
    writer.writerow([username, uploads, views])

file.close()
Answer 0 (score: 2)
Try replacing `treebuilders._base` with `treebuilders.base` in `_html5lib.py` (newer html5lib releases renamed the `_base` module to `base`).

Could you show the full traceback of the error? Which line or file does it come from?
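As an alternative to editing the installed library file by hand, this error usually goes away if both packages are brought up to date, since recent beautifulsoup4 releases already use the renamed `treebuilders.base` module. A sketch of the upgrade, assuming a standard pip setup:

```shell
# Upgrade both packages so beautifulsoup4 matches html5lib's renamed module
pip install --upgrade beautifulsoup4 html5lib
```

Note that the posted code passes `'html.parser'` to BeautifulSoup, so the html5lib error most likely comes from bs4 probing installed tree builders at import time; upgrading (or uninstalling html5lib) addresses that without touching library source files.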