How to loop and save the data from each iteration

Date: 2019-09-26 17:41:58

Tags: python loops for-loop beautifulsoup

I'm trying to learn how to scrape data from web pages in Python, and I'm having trouble building a nested loop. I got some help with an earlier part of this problem (How to pull links from within an 'a' tag). I'm trying to make the code iterate over different weeks of the site (and eventually over multiple years). What I have so far is below, but it doesn't iterate over the two weeks I want and save them.

import requests, re, json
import pandas as pd
from bs4 import BeautifulSoup

weeks=['1','2']
data = pd.DataFrame(columns=['Teams','Link'])

scripts_head = soup.find('head').find_all('script')
all_links = {}
for i in weeks:
    r = requests.get(r'https://www.espn.com/college-football/scoreboard/_/year/2018/seasontype/2/week/'+i)
    soup = BeautifulSoup(r.text, 'html.parser')
    for script in scripts_head:
        if 'window.espn.scoreboardData' in script.text:
            json_scoreboard = json.loads(re.search(r'({.*?});', script.text).group(1))
            for event in json_scoreboard['events']:
                name = event['name']
                for link in event['links']:
                    if link['text'] == 'Gamecast':
                        gamecast = link['href']
                all_links[name] = gamecast
                #Save data to dataframe
                data2=pd.DataFrame(list(all_links.items()),columns=['Teams','Link'])
        #Append new data to existing data        
        data=data.append(data2,ignore_index = True)


#Save dataframe with all links to csv for future use
data.to_csv(r'game_id_data.csv')

Edit: So, to clarify, it creates duplicate data from one week and repeatedly appends it to the end. I've also edited the code to include the proper libraries, so it should be possible to copy, paste, and run it in Python.

2 Answers:

Answer 0 (score: 0):

The problem is in your loop logic:

    if 'window.espn.scoreboardData' in script.text:
        ...
            data2=pd.DataFrame(list(all_links.items()),columns=['Teams','Link'])
    #Append new data to existing data        
    data=data.append(data2,ignore_index = True)

The indentation of the last line is wrong. As written, data2 is appended whether or not there is new scoreboard data: when the if test fails, its body is skipped and the stale data2 left over from a previous iteration gets appended again.
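To make the effect of that indentation concrete, here is a minimal sketch with the scraping replaced by plain strings (the values are stand-ins, not ESPN's real payload); it counts how many times the append line runs in each version:

```python
# Three <script> texts, only one of which carries the scoreboard marker.
scripts = ['noise', 'window.espn.scoreboardData = {...}', 'more noise']

appends_wrong = 0
for script in scripts:
    if 'window.espn.scoreboardData' in script:
        data2 = 'fresh rows'
    appends_wrong += 1  # mis-indented: runs for every script, re-appending stale data2

appends_right = 0
for script in scripts:
    if 'window.espn.scoreboardData' in script:
        data2 = 'fresh rows'
        appends_right += 1  # correctly indented: runs only when new data was parsed

print(appends_wrong, appends_right)  # 3 1
```

With three scripts and one match, the mis-indented version appends three times while the corrected one appends once, which is exactly the repeated-append symptom described in the question.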

Answer 1 (score: 0):

So the workaround I came up with is below. I still get duplicate game IDs in the final dataset, but at least I'm iterating over the whole desired range and collecting all of them. Then at the end I de-duplicate.

import requests, re, json
from bs4 import BeautifulSoup
import csv
import pandas as pd

years=['2015','2016','2017','2018']
weeks=['1','2','3','4','5','6','7','8','9','10','11','12','13','14']
data = pd.DataFrame(columns=['Teams','Link'])

all_links = {}
for year in years:
    for i in weeks:
        r = requests.get(r'https://www.espn.com/college-football/scoreboard/_/year/'+ year + '/seasontype/2/week/'+i)
        soup = BeautifulSoup(r.text, 'html.parser')
        scripts_head = soup.find('head').find_all('script')
        for script in scripts_head:
            if 'window.espn.scoreboardData' in script.text:
                json_scoreboard = json.loads(re.search(r'({.*?});', script.text).group(1))
                for event in json_scoreboard['events']:
                    name = event['name']
                    for link in event['links']:
                        if link['text'] == 'Gamecast':
                            gamecast = link['href']
                    all_links[name] = gamecast
                #Save data to dataframe
                data2=pd.DataFrame(list(all_links.items()),columns=['Teams','Link'])
                #Append new data to existing data        
                data=data.append(data2,ignore_index = True)


#Save dataframe with all links to csv for future use
data_test=data.drop_duplicates(keep='first')
data_test.to_csv(r'all_years_deduped.csv')
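The leftover duplicates come from reusing one cumulative all_links dict: each data2 contains every link seen so far, so games from earlier weeks are appended again on every later week. A sketch of an alternative that builds one frame per page and concatenates them once (the weekly_links dicts are hypothetical stand-ins for the parsed scoreboards; pd.concat also avoids DataFrame.append, which pandas has deprecated):

```python
import pandas as pd

# Hypothetical per-page link dicts standing in for the parsed scoreboards.
weekly_links = [
    {'A vs B': 'http://example.com/game1'},
    {'C vs D': 'http://example.com/game2'},
]

frames = []
for links in weekly_links:  # one dict per (year, week) page
    # Build a frame from this page's links only, not a cumulative dict,
    # so no game is appended more than once.
    frames.append(pd.DataFrame(list(links.items()), columns=['Teams', 'Link']))

data = pd.concat(frames, ignore_index=True)
print(len(data))  # 2
```

With this structure there is nothing to de-duplicate afterwards, although drop_duplicates is still a reasonable safety net if the same game can appear on two pages.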