How do I scrape NHL skater statistics with XPath?

Date: 2018-11-03 02:43:34

Tags: python parsing xpath web-scraping lxml

I'm trying to get data on NHL skaters for the 2017/2018 season. I've started writing the code, but I'm running into problems parsing the data and exporting it to Excel.

Here is my code so far:

#import modules 

from urllib.request import urlopen
from lxml.html import fromstring

import pandas as pd

#connect to url

url = "https://www.hockey-reference.com/leagues/NHL_2018_skaters.html"

#remove HTML comment markup

content = str(urlopen(url).read())
comment = content.replace("-->","").replace("<!--","")
tree = fromstring(comment)

#setting up excel columns

columns = ("names", "gp", "g", "s", "team")
df = pd.DataFrame(columns=columns)    

#attempt at parsing data while using loop    

for nhl, skater_row in enumerate(tree.xpath('//table[contains(@class,"stats_table")]/tr')):
    names = pitcher_row.xpath('.//td[@data-stat="player"]/a')[0].text
    gp = skater_row.xpath('.//td[@data-stat="games_played"]/text()')[0]
    g = skater_row.xpath('.//td[@data-stat="goals"]/text()')[0]
    s = skater_row.xpath('.//td[@data-stat="shots"]/text()')[0]
    try:
        team = skater_row.xpath('.//td[@data-stat="team_id"]/a')[0].text

    # create pandas dataframe to export data to excel

    df.loc[nhl] = (names, team, gp, g, s)

#write data to excel

writer = pd.ExcelWriter('NHL skater.xlsx')
df.to_excel(writer, 'Sheet1')
writer.save()

Can someone explain how to parse this data? Do you have any tips for writing XPath expressions so that I can loop over the data?

I'm having trouble writing this line:

for nhl, skater_row in enumerate(tree.xpath...

How do you find the XPath? Have you used XPath Finder or XPath Helper?

I'm also getting an error on the following line:

df.loc[nhl] = (names, team, gp, g, s)

It reports invalid syntax at df.

I'm new to web scraping and don't have any coding experience. Any help would be greatly appreciated. Thanks in advance for your time!

2 Answers:

Answer 0 (score: 2)

If you still want to use XPath and fetch only the required data, rather than filtering a complete data set afterwards, you can try something like this:

for row in tree.xpath('//table[@id="stats"]/tbody/tr[not(@class="thead")]'):
    name = row.xpath('.//td[@data-stat="player"]')[0].text_content()
    gp = row.xpath('.//td[@data-stat="games_played"]')[0].text_content()
    g = row.xpath('.//td[@data-stat="goals"]')[0].text_content()
    s = row.xpath('.//td[@data-stat="shots"]')[0].text_content()
    team = row.xpath('.//td[@data-stat="team_id"]')[0].text_content()

Output of print(name, gp, g, s, team):

Justin Abdelkader 75 13 110 DET
Pontus Aberg 53 4 70 TOT
Pontus Aberg 37 2 39 NSH
Pontus Aberg 16 2 31 EDM
Noel Acciari 60 10 66 BOS
Kenny Agostino 5 0 11 BOS
Sebastian Aho 78 29 200 CAR
...
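The parsed rows can then be collected into a pandas DataFrame and written to Excel, which is what the question was ultimately after. A minimal sketch (the sample tuples below stand in for the values the XPath loop would produce):

```python
import pandas as pd

# Sample rows standing in for the (name, gp, g, s, team) tuples
# produced by the XPath loop above
rows = [
    ("Justin Abdelkader", "75", "13", "110", "DET"),
    ("Pontus Aberg", "53", "4", "70", "TOT"),
]
df = pd.DataFrame(rows, columns=["names", "gp", "g", "s", "team"])

# text_content() returns strings, so convert the numeric columns
df[["gp", "g", "s"]] = df[["gp", "g", "s"]].astype(int)

# Export to Excel (requires openpyxl or xlsxwriter to be installed):
# df.to_excel("NHL skater.xlsx", sheet_name="Sheet1", index=False)
```

Note that appending row by row with df.loc[i] = ... as in the question works, but building the full list of tuples first and constructing the DataFrame once is faster.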

Answer 1 (score: 1)

IIUC: this can be done with BeautifulSoup and pandas read_html:

import requests
import pandas as pd
from bs4 import BeautifulSoup

url = 'https://www.hockey-reference.com/leagues/NHL_2018_skaters.html'
pg = requests.get(url)
bsf = BeautifulSoup(pg.content, 'html5lib')
tables = bsf.find_all('table', attrs={'id': 'stats'})
dfs = pd.read_html(tables[0].prettify())
df = dfs[0]

The resulting dataframe will have all the columns from the table; use pandas to filter it down to just the columns you need.

# Filters only columns 1, 3 and 5; all the required columns can be selected similarly.
dff = df[df.columns[[1, 3, 5]]]
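Selecting columns by position is fragile if the table layout changes; since read_html keeps the page's own header labels, selecting by name is clearer. A sketch with a hand-built frame (the labels "Player", "GP", "G", "S", "Tm" are assumed to match the page's headers and may differ):

```python
import pandas as pd

# Stand-in for the frame read_html would return; the column labels
# are assumptions about the page's headers, not verified against it
df = pd.DataFrame({
    "Player": ["Justin Abdelkader", "Sebastian Aho"],
    "Tm": ["DET", "CAR"],
    "GP": [75, 78],
    "G": [13, 29],
    "S": [110, 200],
})

# Select the wanted columns by label instead of by position
dff = df[["Player", "GP", "G", "S", "Tm"]]
```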