I've looked all over at examples of scraping ESPN fantasy football leagues. I'm new to web scraping, but for that very reason I researched this extensively before posting. I can't get into my league and pull anything useful. I gather that you're supposed to pass cookies with the request to identify yourself when the league is private.
import requests
from bs4 import BeautifulSoup
page = requests.get('https://fantasy.espn.com/football/league?leagueId=########',
                    cookies={'SWID': '#######', 'espn_s2': '#######'})
soup = BeautifulSoup(page.text, 'html.parser')
test = soup.find_all(class_='team-scores')
print(len(test))
print(type(test))
print(test)
0
<class 'bs4.element.ResultSet'>
[]
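For context on the empty result: the league page appears to be rendered client-side by JavaScript, so the HTML that `requests` downloads never contains the standings and `find_all` has nothing to match (which is also why the answer below goes through the JSON API instead). A minimal sketch of the same failure mode, using assumed, simplified markup:

```python
from bs4 import BeautifulSoup

# Assumed, simplified markup: a JavaScript-rendered page ships only an
# empty mount point; the elements the script creates later never exist
# in the HTML that requests downloaded.
raw_html = '<html><body><div id="app"></div></body></html>'
soup = BeautifulSoup(raw_html, 'html.parser')
matches = soup.find_all(class_='team-scores')
print(len(matches), type(matches), matches)
# → 0 <class 'bs4.element.ResultSet'> []
```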
Based on some posts referenced in https://stmorse.github.io/journal/espn-fantasy-python.html, as well as the article itself, it seems important to pass the cookies along, yet performing the request without the cookies gets the same result. I compared the soup with and without the cookies and they are equivalent.
I know there's an API available for ESPN, but I haven't managed to get any of that code working for me. I'm hoping to scrape the team names, take each team's scores, and build every possible schedule to redistribute the results and see how lucky or unlucky each team in my league has been. I'm also curious about doing this with Yahoo. At this point I could just grab the data manually, but I'd like it in a more general form.
Any advice or help for an inexperienced web scraper would be greatly appreciated.
Answer 0 (score: 0)
You'd have to share your league ID for me to test against it, but here's some code that does some data manipulation of a league. Basically you get the data back in JSON format, then parse it to calculate wins/losses based on weekly points. You can then sort it to create a final table that compares the regular-season record to the overall record and shows which teams over- or under-performed given their schedule:
import requests
import pandas as pd

# Grab a SWID cookie from a plain visit; for a private league you would
# also need to pass your espn_s2 cookie
s = requests.Session()
r = s.get('https://www.espn.com')
swid = s.cookies.get_dict()['SWID']

league_id = 31181
url = 'https://fantasy.espn.com/apis/v3/games/ffl/seasons/2019/segments/0/leagues/%s' % league_id
r = requests.get(url, cookies={"swid": swid}).json()

# Get team IDs
teamId = {}
for team in r['teams']:
    teamId[team['id']] = team['location'].strip() + ' ' + team['nickname'].strip()

# Get each team's weekly points and calculate their head-to-head records
r = requests.get(url, cookies={"swid": swid}, params={"view": "mMatchup"}).json()
weeklyPts = pd.DataFrame()
for each in r['schedule']:
    week = each['matchupPeriodId']
    if week >= 14:  # regular season only
        continue
    homeTm = teamId[each['home']['teamId']]
    homeTmPts = each['home']['totalPoints']
    try:
        awayTm = teamId[each['away']['teamId']]
        awayTmPts = each['away']['totalPoints']
    except KeyError:
        # no 'away' entry means a bye week; skip it
        continue
    temp_df = pd.DataFrame(list(zip([homeTm, awayTm], [homeTmPts, awayTmPts], [week, week])),
                           columns=['team', 'pts', 'week'])
    if homeTmPts > awayTmPts:
        temp_df.loc[0, 'win'] = 1
        temp_df.loc[0, 'loss'] = 0
        temp_df.loc[0, 'tie'] = 0
        temp_df.loc[1, 'win'] = 0
        temp_df.loc[1, 'loss'] = 1
        temp_df.loc[1, 'tie'] = 0
    elif homeTmPts < awayTmPts:
        temp_df.loc[0, 'win'] = 0
        temp_df.loc[0, 'loss'] = 1
        temp_df.loc[0, 'tie'] = 0
        temp_df.loc[1, 'win'] = 1
        temp_df.loc[1, 'loss'] = 0
        temp_df.loc[1, 'tie'] = 0
    else:
        temp_df.loc[0, 'win'] = 0
        temp_df.loc[0, 'loss'] = 0
        temp_df.loc[0, 'tie'] = 1
        temp_df.loc[1, 'win'] = 0
        temp_df.loc[1, 'loss'] = 0
        temp_df.loc[1, 'tie'] = 1
    # DataFrame.append was removed in pandas 2.0; pd.concat is equivalent here
    weeklyPts = pd.concat([weeklyPts, temp_df], sort=True).reset_index(drop=True)
weeklyPts['win'] = weeklyPts.groupby(['team'])['win'].cumsum()
weeklyPts['loss'] = weeklyPts.groupby(['team'])['loss'].cumsum()
weeklyPts['tie'] = weeklyPts.groupby(['team'])['tie'].cumsum()

# Calculate each team's record compared to all other teams' points week to week
cumWeeklyRecord = {}
for week in weeklyPts[weeklyPts['pts'] > 0]['week'].unique():
    df = weeklyPts[weeklyPts['week'] == week]
    cumWeeklyRecord[week] = {}
    for idx, row in df.iterrows():
        team = row['team']
        pts = row['pts']
        win = len(df[df['pts'] < pts])
        loss = len(df[df['pts'] > pts])
        tie = len(df[df['pts'] == pts])
        cumWeeklyRecord[week][team] = {}
        cumWeeklyRecord[week][team]['win'] = win
        cumWeeklyRecord[week][team]['loss'] = loss
        cumWeeklyRecord[week][team]['tie'] = tie - 1  # don't count a team tying itself

# Combine those cumulative records to get an overall season record
overallRecord = {}
for each in cumWeeklyRecord.items():
    for team in each[1].keys():
        if team not in overallRecord.keys():
            overallRecord[team] = {'win': 0, 'loss': 0, 'tie': 0}
        overallRecord[team]['win'] += each[1][team]['win']
        overallRecord[team]['loss'] += each[1][team]['loss']
        overallRecord[team]['tie'] += each[1][team]['tie']

# A little cleaning up of the data and calculating win %
overallRecord_df = pd.DataFrame(overallRecord).T
overallRecord_df = overallRecord_df.rename_axis('team').reset_index()
overallRecord_df = overallRecord_df.rename(columns={'win': 'overall_win', 'loss': 'overall_loss', 'tie': 'overall_tie'})
overallRecord_df['overall_win%'] = overallRecord_df['overall_win'] / (overallRecord_df['overall_win'] + overallRecord_df['overall_loss'] + overallRecord_df['overall_tie'])
overallRecord_df['overall_rank'] = overallRecord_df['overall_win%'].rank(ascending=False, method='min')

regularSeasRecord = weeklyPts[weeklyPts['week'] == 13][['team', 'win', 'loss', 'tie']].copy()
regularSeasRecord['win%'] = regularSeasRecord['win'] / (regularSeasRecord['win'] + regularSeasRecord['loss'] + regularSeasRecord['tie'])
regularSeasRecord['rank'] = regularSeasRecord['win%'].rank(ascending=False, method='min')

final_df = overallRecord_df.merge(regularSeasRecord, how='left', on=['team'])
Output:

print(final_df.sort_values('rank').to_string())
team overall_loss overall_tie overall_win overall_win% overall_rank win loss tie win% rank
0 Luck Dynasty 39 0 104 0.727273 1.0 12.0 1.0 0.0 0.923077 1.0
10 Warsaw Widow Makers 48 0 95 0.664336 3.0 10.0 3.0 0.0 0.769231 2.0
2 Team Powell 60 0 83 0.580420 5.0 8.0 5.0 0.0 0.615385 3.0
1 Team White 46 0 97 0.678322 2.0 7.0 6.0 0.0 0.538462 4.0
3 The SouthWest Slingers 55 0 88 0.615385 4.0 7.0 6.0 0.0 0.538462 4.0
5 U MAD BRO? 71 0 72 0.503497 6.0 7.0 6.0 0.0 0.538462 4.0
11 Team Troxell 88 0 55 0.384615 9.0 7.0 6.0 0.0 0.538462 4.0
6 Organized Chaos 72 0 71 0.496503 7.0 6.0 7.0 0.0 0.461538 8.0
7 Jobobes Jabronis 88 0 55 0.384615 9.0 6.0 7.0 0.0 0.461538 8.0
4 Killa Bees!! 98 0 45 0.314685 11.0 4.0 9.0 0.0 0.307692 10.0
9 Faceless Men 86 0 57 0.398601 8.0 3.0 10.0 0.0 0.230769 11.0
8 Rollin with Mahomies 107 0 36 0.251748 12.0 1.0 12.0 0.0 0.076923 12.0
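As for quantifying how lucky or unlucky each team has been, one simple reading of the table above (my own formula, not part of the code in this answer): treat the all-play percentage `overall_win%` as the win rate a team "deserved", multiply by the 13 regular-season games to get expected wins, and compare with actual wins. A positive gap suggests a favorable schedule. A sketch using a few rows copied from the final table:

```python
import pandas as pd

# A few rows copied from the final table above; 'overall_win%' is the
# record against every team each week, 'win' is the actual record.
df = pd.DataFrame({
    'team': ['Luck Dynasty', 'Team White', 'Faceless Men'],
    'win': [12.0, 7.0, 3.0],
    'overall_win%': [0.727273, 0.678322, 0.398601],
})
games = 13  # regular-season weeks in this league
df['expected_win'] = df['overall_win%'] * games
df['luck'] = df['win'] - df['expected_win']
print(df.sort_values('luck', ascending=False)[['team', 'luck']])
```

With these numbers "Luck Dynasty" comes out roughly 2.5 wins ahead of its all-play expectation, living up to its name.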