I'm trying to use BeautifulSoup to scrape the .xls tables available for download from Xcel Energy's website (https://www.xcelenergy.com/working_with_us/municipalities/community_energy_reports).
This function collects the URL links to the tables and attempts to download them:
    url = 'https://www.xcelenergy.com/working_with_us/municipalities/community_energy_reports'
    dir = 'C:/Users/aobrien/PycharmProjects/xceldatascraper/'

    def scraper(page):
        from bs4 import BeautifulSoup as bs
        import urllib.request
        import requests
        import os
        import re
        tld = r'https://www.xcelenergy.com'
        pageobj = requests.get(page, verify=False)
        sp = bs(pageobj.content, 'html.parser')
        xlst, fnms = [], []
        links = [a['href'] for a in sp.find_all('a', attrs={'href': re.compile("/staticfiles/")})]
        for idx, a in enumerate(links):
            if a.endswith('.xls'):
                furl = tld + str(a)
                xlst.append(furl)
                fnms.append(a.split('/')[4])
        naur = zip(fnms, xlst)
        if not os.path.exists(dir + 'tables'):
            os.makedirs(dir + 'tables')
        for name, url in naur:
            print(url)
            res = urllib.request.urlopen(url)
            xls = open(dir + 'tables/' + name, 'wb')
            xls.write(res.read())
            xls.close()

    scraper(url)
The script fails when urllib.request.urlopen(url) tries to access the file, raising "urllib.error.HTTPError: HTTP Error 404: Not Found". The print(url) statement shows the URL the script constructed (https://www.xcelenergy.com/staticfiles/xe-responsive/Working With Us / MI-City-Forest-Lake-2016.xls), and manually pasting that URL into a browser downloads the file just fine.
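One thing I noticed while debugging: the printed URL contains literal spaces, which a browser will percent-encode automatically before sending the request, but urllib passes through as-is. A minimal sketch of encoding just the path with urllib.parse.quote, assuming the URL printed above is exactly the string urlopen receives:

    from urllib.parse import quote, urlsplit, urlunsplit

    # The URL exactly as my script prints it (note the literal spaces)
    raw = ('https://www.xcelenergy.com/staticfiles/xe-responsive/'
           'Working With Us / MI-City-Forest-Lake-2016.xls')

    # Percent-encode only the path component, leaving scheme and host untouched;
    # quote() keeps '/' unescaped by default and turns each space into %20
    parts = urlsplit(raw)
    safe = urlunsplit(parts._replace(path=quote(parts.path)))
    print(safe)

I'm not sure whether this is the actual cause of the 404, or whether the server rejects the request for some other reason.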
What am I missing?