I am trying to scrape table data from the following website: https://fantasyfootball.telegraph.co.uk/premier-league/statscentre/
The goal is to get all the player data and store it in a dictionary.
I am using BeautifulSoup, and I am able to find the table in the HTML content, but the table body it returns is empty.
From reading other posts, I gather this may be because the site loads the table data after the page itself has loaded, but I cannot find a way to work around it.
My code is as follows:
from bs4 import BeautifulSoup
import requests

url = "https://fantasyfootball.telegraph.co.uk/premier-league/statscentre/"

# Make a GET request to fetch the raw HTML content
html_content = requests.get(url).text

# Parse the HTML content
soup = BeautifulSoup(html_content, "lxml")

# Find the player table within the website
player_table = soup.find("table", attrs={"class": "player-profile-content"})
print(player_table)
The result I get is this:
<table class="playerrow playlist" id="table-players">
<thead>
<tr class="table-head"></tr>
</thead>
<tbody></tbody>
</table>
The actual HTML on the site is very long, because they pack a lot of data into each <tr> and its subsequent <td> tags, so I won't post it here unless someone asks. Suffice it to say that the header row contains several <td> cells and the body contains several <tr> rows.
Answer 0 (score: 3)
This script will print all player statistics (the data is loaded as JSON from an external URL):
import ssl
import json
import requests
from urllib3 import poolmanager

# workaround to avoid SSL errors:
class TLSAdapter(requests.adapters.HTTPAdapter):
    def init_poolmanager(self, connections, maxsize, block=False):
        """Create and initialize the urllib3 PoolManager."""
        ctx = ssl.create_default_context()
        ctx.set_ciphers('DEFAULT@SECLEVEL=1')
        self.poolmanager = poolmanager.PoolManager(
            num_pools=connections,
            maxsize=maxsize,
            block=block,
            ssl_version=ssl.PROTOCOL_TLS,
            ssl_context=ctx)

url = 'https://fantasyfootball.telegraph.co.uk/premier-league/json/getstatsjson'

session = requests.session()
session.mount('https://', TLSAdapter())
data = session.get(url).json()

# uncomment this to print all data:
# print(json.dumps(data, indent=4))

for s in data['playerstats']:
    for k, v in s.items():
        print('{:<15} {}'.format(k, v))
    print('-' * 80)
Prints:
SUSPENSION None
WEEKPOINTS 0
TEAMCODE MCY
SXI 34
PLAYERNAME de Bruyne, K
FULLCLEAN -
SUBS 3
TEAMNAME Man City
MISSEDPEN 0
YELLOWCARD 3
CONCEED -
INJURY None
PLAYERFULLNAME Kevin de Bruyne
RATIO 40.7
PICKED 36
VALUE 5.6
POINTS 228
PARTCLEAN -
OWNGOAL 0
ASSISTS 30
GOALS 14
REDCARD 0
PENSAVE -
PLAYERID 3001
POS MID
--------------------------------------------------------------------------------
...and so on.
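Since the original goal was to store the player data in a dictionary, the JSON records above can be keyed by player name. A minimal sketch, using a small hypothetical sample shaped like the `playerstats` list shown above (in practice you would use the `data` fetched by the script):

```python
# Hypothetical sample records mimicking data['playerstats'];
# the real list comes from the JSON endpoint above.
sample = {
    "playerstats": [
        {"PLAYERNAME": "de Bruyne, K", "TEAMNAME": "Man City", "POINTS": 228},
        {"PLAYERNAME": "Salah, M", "TEAMNAME": "Liverpool", "POINTS": 233},
    ]
}

# Build a dictionary keyed by PLAYERNAME for O(1) lookups.
players = {s["PLAYERNAME"]: s for s in sample["playerstats"]}

print(players["de Bruyne, K"]["POINTS"])  # 228
```

With the real response you would write `players = {s["PLAYERNAME"]: s for s in data["playerstats"]}` instead.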
Answer 1 (score: 1)
A simple solution is to monitor the network traffic and see how the data is exchanged. You will find that the data comes from a GET call to Request URL: https://fantasyfootball.telegraph.co.uk/premier-league/json/getstatsjson
which returns clean JSON, so we don't need BeautifulSoup at all; requests alone does the job.
import requests
import pandas as pd
URI = 'https://fantasyfootball.telegraph.co.uk/premier-league/json/getstatsjson'
r = requests.get(URI)
data = r.json()
df = pd.DataFrame(data['playerstats'])
print(df.head())  # show the first 5 rows
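If you still want the dictionary the question asked for rather than a DataFrame, pandas can convert the records directly. A minimal sketch with a hypothetical subset of the fields (the real `records` would be `data['playerstats']` from the request above):

```python
import pandas as pd

# Hypothetical subset of records, shaped like data['playerstats'].
records = [
    {"PLAYERNAME": "de Bruyne, K", "POS": "MID", "POINTS": 228},
    {"PLAYERNAME": "Aguero, S", "POS": "STR", "POINTS": 150},
]
df = pd.DataFrame(records)

# One dict of stats per player, keyed by player name.
stats = df.set_index("PLAYERNAME").to_dict("index")

print(stats["de Bruyne, K"])  # {'POS': 'MID', 'POINTS': 228}
```

`to_dict("index")` keeps each row as its own inner dictionary, which matches the "all player data in a dictionary" goal from the question.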