I'm new to web scraping and am currently trying to learn it so that I can automate a betting competition on the German Bundesliga with friends (the platform we use is kicktipp.de). I've already managed to log in to that site and post football results with Python. Unfortunately, so far those are just Poisson-distributed random numbers. To improve on that, my idea is to download the odds from bwin; more precisely, I'm trying to download the odds for the exact results. This is where the problem starts: so far I haven't been able to extract them with BeautifulSoup. Using Google Chrome I tried to work out which part of the HTML code I need, but for some reason I can't find those parts with BeautifulSoup. My code currently looks like this:
from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup
my_url = "https://sports.bwin.com/de/sports/4/wetten/fußball#categoryIds=192&eventId=&leagueIds=43&marketGroupId=&page=0&sportId=4&templateIds=0.8649061927316986"
# opening up connection, grabbing the page
uClient = uReq(my_url)
page_html = uClient.read()
uClient.close()
# html parsing
page_soup = soup(page_html, "html.parser")
containers1 = page_soup.findAll("div", {"class": "marketboard-event-group__item--sub-group"})
print(len(containers1))
containers2 = page_soup.findAll("table", {"class": "marketboard-event-with-header__markets-list"})
print(len(containers2))
Judging from the lengths of the containers, they either hold more items than I expected or, for reasons I don't understand, they are empty... I hope you can point me in the right direction. Thanks in advance!
Answer 0 (score: 5)
You can use selenium together with ChromeDriver to scrape pages whose content is generated by JavaScript, which is the case here.
from selenium import webdriver
from bs4 import BeautifulSoup
url = "https://sports.bwin.com/de/sports/4/wetten/fußball#categoryIds=192&eventId=&leagueIds=43&marketGroupId=&page=0&sportId=4&templateIds=0.8649061927316986"
driver = webdriver.Chrome()
driver.get(url)
soup = BeautifulSoup(driver.page_source, 'html.parser')
containers = soup.findAll("table", {"class": "marketboard-event-with-header__markets-list"})
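If the JavaScript content has not finished rendering when driver.page_source is read, the tables can come back empty. A minimal sketch of an explicit wait, placed between driver.get(url) and the page_source read in the snippet above (an addition on my part, assuming the same class name as below), would be:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait up to 10 seconds until at least one odds table is present in the DOM,
# so that driver.page_source contains the rendered markup.
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located(
        (By.CLASS_NAME, "marketboard-event-with-header__markets-list")
    )
)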
Now containers does hold what we want: the table elements. Inspecting a little further, it is easy to see that the text we want sits in alternating <div> tags, so we can use zip and iter to build, for each table, a list of (result, odds) tuples from the alternating elements of the divs list:
resultAndOdds = []
for container in containers:
    divs = container.findAll('div')
    texts = [div.text for div in divs]
    it = iter(texts)
    resultAndOdds.append(list(zip(it, it)))
Demo:
>>> resultAndOdds[0]
[('1:0', '9.25'), ('0:0', '7.25'), ('0:1', '7.50'), ('2:0', '16.00'), ('1:1', '6.25'), ('0:2', '10.00'), ('2:1', '11.50'), ('2:2', '15.00'), ('1:2', '9.25'), ('3:0', '36.00'), ('3:3', '51.00'), ('0:3', '19.50'), ('3:1', '26.00'), ('4:4', '251.00'), ('1:3', '17.00'), ('3:2', '36.00'), ('2:3', '29.00'), ('4:0', '126.00'), ('0:4', '51.00'), ('4:1', '101.00'), ('1:4', '41.00'), ('4:2', '151.00'), ('2:4', '81.00'), ('4:3', '251.00'), ('3:4', '251.00'), ('Jedes andere Ergebnis', '29.00')]
>>> resultAndOdds[1]
[('1:0', '5.00'), ('0:0', '2.65'), ('0:1', '4.10'), ('2:0', '15.50'), ('1:1', '7.25'), ('0:2', '10.50'), ('2:1', '21.00'), ('2:2', '67.00'), ('1:2', '18.00'), ('3:0', '81.00'), ('3:3', '251.00'), ('0:3', '36.00'), ('3:1', '126.00'), ('4:4', '251.00'), ('1:3', '81.00'), ('3:2', '251.00'), ('2:3', '251.00'), ('4:0', '251.00'), ('0:4', '201.00'), ('4:1', '251.00'), ('1:4', '251.00'), ('4:2', '251.00'), ('2:4', '251.00'), ('4:3', '251.00'), ('3:4', '251.00'), ('Jedes andere Ergebnis', '251.00')]
>>> len(resultAndOdds)
24
Depending on how you want the data to look, you can also grab the title of each table with:
titlesElements = soup.findAll("div", {"class":"marketboard-event-with-header__market-name"})
titlesTexts = [title.text for title in titlesElements]
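If you want the titles and the odds together, a minimal sketch (assuming the titles and the tables come back in the same document order; oddsByTitle is just an illustrative name) could be:

# Pair each market title with its list of (result, odds) tuples.
# This assumes titlesTexts and resultAndOdds line up one-to-one,
# i.e. both were scraped in the same document order.
oddsByTitle = dict(zip(titlesTexts, resultAndOdds))
for title, odds in oddsByTitle.items():
    print(title, odds[:3])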