Parsing a website to get NBA team RPI table data

Date: 2018-02-22 17:19:58

Tags: python parsing beautifulsoup python-requests lxml

I want to create a Python program that takes a user's input of an NBA team and returns that team's RPI from http://www.espn.com/nba/stats/rpi.

I have been playing around with lxml (from lxml import html / import requests) as well as Beautiful Soup, but can't find a solution.

I think what's throwing me off is that the classes on the table rows are things like "oddrow team-46-14" or "evenrow team-46-3". If the list updates on the website and a team's row moves, it may no longer be an odd row (or an even one).
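(For reference, the odd/even concern can be sidestepped by matching on the class prefix with a regex, since BeautifulSoup tests a regex against each class in a multi-valued class attribute separately. A minimal sketch on an inline HTML fragment modeled after the page, not the live page itself:)

```python
import re
from bs4 import BeautifulSoup

# Small stand-in for the ESPN table; real rows carry classes like
# "oddrow team-46-14" / "evenrow team-46-3".
html = """
<table>
  <tr class="colhead"><td>RK</td><td>TEAM</td></tr>
  <tr class="oddrow team-46-14"><td>1</td><td>Golden State</td></tr>
  <tr class="evenrow team-46-3"><td>2</td><td>Houston</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
# Match any row whose class list contains "oddrow" OR "evenrow",
# so the scrape does not depend on a team staying in an odd row.
rows = soup.find_all("tr", class_=re.compile(r"^(odd|even)row$"))
teams = [row.find_all("td")[1].text for row in rows]
print(teams)  # ['Golden State', 'Houston']
```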

2 answers:

Answer 0 (score: 0):

You can create a dictionary with each team name as the key and its stats as a list:

import urllib  # Python 2; in Python 3 use urllib.request.urlopen
from bs4 import BeautifulSoup as soup
import re
data = str(urllib.urlopen('http://www.espn.com/nba/stats/rpi').read())
# NOTE: this pattern matches only rows whose class starts with "oddrow",
# so only every other team appears in the output below; use
# r'(odd|even)row team\-\d+\-\d+' to capture all teams.
team_data = [re.split('\n', i.text) for i in soup(data, 'lxml').find_all('tr', {'class': re.compile('oddrow team\-\d+\-\d+')})]
# Each split row is [rank, name, stat1, stat2, ...]; key on the name.
final_team_data = {a[1]: a[2:] for a in team_data}

Output:

{u'Toronto': [u'.552', u'41', u'16', u'.719', u'.497', u'0', u'6376', u'5892', u'45-12', u'.786', u''], u'Phoenix': [u'.449', u'18', u'41', u'.305', u'.498', u'0', u'6129', u'6657', u'12-47', u'.204', u''], u'LA Lakers': [u'.477', u'23', u'34', u'.404', u'.502', u'0', u'6114', u'6282', u'22-35', u'.390', u''], u'Dallas': [u'.461', u'18', u'40', u'.310', u'.511', u'0', u'5920', u'6058', u'24-34', u'.406', u''], u'Miami': [u'.505', u'30', u'28', u'.517', u'.501', u'0', u'5830', u'5882', u'27-31', u'.463', u''], u'Washington': [u'.509', u'33', u'24', u'.579', u'.486', u'0', u'6123', u'6015', u'33-24', u'.573', u''], u'Philadelphia': [u'.528', u'30', u'25', u'.545', u'.522', u'0', u'5912', u'5803', u'32-23', u'.576', u''], u'Denver': [u'.511', u'32', u'26', u'.552', u'.497', u'0', u'6256', u'6196', u'31-27', u'.540', u''], u'Minnesota': [u'.524', u'36', u'25', u'.590', u'.502', u'0', u'6694', u'6515', u'37-24', u'.610', u''], u'Brooklyn': [u'.456', u'19', u'40', u'.322', u'.500', u'0', u'6217', u'6468', u'20-39', u'.342', u''], u'New York': [u'.470', u'23', u'36', u'.390', u'.496', u'0', u'6116', u'6260', u'24-35', u'.405', u''], u'Detroit': [u'.500', u'28', u'29', u'.491', u'.503', u'0', u'5893', u'5899', u'28-29', u'.496', u''], u'Oklahoma City': [u'.512', u'33', u'26', u'.559', u'.496', u'0', u'6289', u'6088', u'37-22', u'.631', u''], u'Golden State': [u'.569', u'44', u'14', u'.759', u'.506', u'0', u'6719', u'6249', u'45-13', u'.768', u''], u'San Antonio': [u'.515', u'35', u'24', u'.593', u'.489', u'0', u'5993', u'5811', u'37-22', u'.625', u'']}
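With that dictionary in hand, the original goal (take a team name from the user, return its RPI) is a simple lookup: the RPI is the first element of each list. A minimal sketch, using a couple of entries from the output above as sample data; in the real program, final_team_data comes from the scraping code:

```python
# Sample of the scraped dictionary shown above.
final_team_data = {
    'Toronto': ['.552', '41', '16', '.719', '.497', '0', '6376', '5892', '45-12', '.786', ''],
    'Golden State': ['.569', '44', '14', '.759', '.506', '0', '6719', '6249', '45-13', '.768', ''],
}

def team_rpi(team):
    """Return the RPI string for a team, or None if the team is unknown."""
    stats = final_team_data.get(team)
    return stats[0] if stats else None

print(team_rpi('Toronto'))       # .552
print(team_rpi('Golden State'))  # .569
```

In an interactive program you would feed team_rpi() the result of input() and handle the None case with a "team not found" message.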

Answer 1 (score: 0):

You can use select to extract the table data:

>>> import urllib.request
>>> from bs4 import BeautifulSoup
>>> soup = BeautifulSoup(urllib.request.urlopen("http://www.espn.com/nba/stats/rpi"), "lxml")
>>> data = [[x.text for x in row.find_all("td")] for row in soup.select("table tr")]
>>> for row in data:
...     print(row)
... 
['2017-18 NBA RPI Rankings']
['RK', 'TEAM', 'RPI', 'W', 'L', 'PCT', 'SOS', 'PWR', 'PF', 'PA', 'EWL', 'EWP']
['1', 'Golden State', '.569', '44', '14', '.759', '.506', '0', '6719', '6249', '45-13', '.768']
['2', 'Houston', '.562', '44', '13', '.772', '.492', '0', '6504', '6007', '45-12', '.788']
['3', 'Toronto', '.552', '41', '16', '.719', '.497', '0', '6376', '5892', '45-12', '.786']
[... removed ...]
['30', 'Atlanta', '.444', '18', '41', '.305', '.490', '0', '6116', '6360', '20-39', '.344']

There are some non-breaking space (\xa0) characters in there to deal with. If this is the only thing you are doing with the table, that's good enough. But if you want to treat this as a data table, rather than just a lookup, you may want to consider pandas' read_html:

>>> import pandas as pd
>>> pd.read_html("http://www.espn.com/nba/stats/rpi", header=0, skiprows=1)[0]
      RK           TEAM    RPI   W   L    PCT    SOS  PWR    PF    PA    EWL    EWP
0    1.0   Golden State  0.569  44  14  0.759  0.506    0  6719  6249  45-13  0.768
1    2.0        Houston  0.562  44  13  0.772  0.492    0  6504  6007  45-12  0.788
2    3.0        Toronto  0.552  41  16  0.719  0.497    0  6376  5892  45-12  0.786
[etc.]
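With the DataFrame route, the user-input lookup becomes a boolean-indexed selection on the TEAM column. A sketch assuming the column layout shown above, built here from a small literal frame rather than the live page:

```python
import pandas as pd

# Stand-in for pd.read_html("...")[0]; same columns as the output above.
df = pd.DataFrame(
    [[1, 'Golden State', 0.569], [2, 'Houston', 0.562], [3, 'Toronto', 0.552]],
    columns=['RK', 'TEAM', 'RPI'],
)

def team_rpi(df, team):
    """Look up a team's RPI in the rankings table; None if absent."""
    match = df.loc[df['TEAM'] == team, 'RPI']
    return match.iloc[0] if not match.empty else None

print(team_rpi(df, 'Houston'))  # 0.562
```

For many lookups it may be cleaner to call df.set_index('TEAM') once and use df.loc[team, 'RPI'] directly.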