Trying to scrape tables from multiple web pages and store them in a list. The list prints the results of the first web page 3 times.
import pandas as pd
import requests
from bs4 import BeautifulSoup

dflist = []
for i in range(1, 4):
    s = requests.Session()
    res = requests.get(r'http://www.ironman.com/triathlon/events/americas/ironman/world-championship/results.aspx?p=' + str(i) + 'race=worldchampionship&rd=20181013&agegroup=Pro&sex=M&y=2018&ps=20#axzz5VRWzxmt3')
    soup = BeautifulSoup(res.content, 'lxml')
    table = soup.find_all('table')
    dfs = pd.read_html(str(table))
    dflist.append(dfs)
    s.close()
print(dflist)
Answer (score 2):
You are missing a & after '?p=' + str(i), so every one of your requests sets p to ${NUMBER}race=worldchampionship, which probably means nothing to ironman.com and is simply ignored. Insert a & at the beginning of 'race=worldchampionship'.
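You can see the effect of the missing & without touching the network by inspecting the concatenated query string (the i = 1 value here is just for illustration):

```python
# Reproduce the question's string concatenation for one page
# (i = 1 is a hypothetical value for illustration):
i = 1
url = ('http://www.ironman.com/triathlon/events/americas/ironman/'
       'world-championship/results.aspx?p=' + str(i)
       + 'race=worldchampionship&rd=20181013&agegroup=Pro&sex=M'
       '&y=2018&ps=20#axzz5VRWzxmt3')

# The first query parameter comes out mangled: 'p' and 'race' fuse together.
first_param = url.split('?')[1].split('&')[0]
print(first_param)  # -> 'p=1race=worldchampionship'
```

Because p never matches a valid page number, the server falls back to the first page every time, which is why the same results appear 3 times.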
To prevent this kind of error in the future, you can pass the URL's query parameters as a dict to the params keyword argument, like this:
params = {
"p": i,
"race": "worldchampionship",
"rd": "20181013",
"agegroup": "Pro",
"sex": "M",
"y": "2018",
"ps": "20",
}
res = requests.get(r'http://www.ironman.com/triathlon/events/americas/ironman/world-championship/results.aspx#axzz5VRWzxmt3', params=params)
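As a quick sanity check (again without hitting the network), you can build the request with requests.Request and call .prepare() to inspect the final URL that params produces; the fragment is dropped here just to keep the sketch short:

```python
import requests

params = {
    "p": 1,
    "race": "worldchampionship",
    "rd": "20181013",
    "agegroup": "Pro",
    "sex": "M",
    "y": "2018",
    "ps": "20",
}

# Build the request without sending it, then inspect the encoded URL.
req = requests.Request(
    'GET',
    'http://www.ironman.com/triathlon/events/americas/ironman/'
    'world-championship/results.aspx',
    params=params,
).prepare()
print(req.url)  # every parameter is properly separated by '&'
```

requests handles the ?, the & separators, and any percent-encoding for you, so a page-number loop only needs to change params["p"].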