How can I automate this BeautifulSoup import?

Asked: 2013-05-20 00:47:55

Tags: python python-2.7 beautifulsoup

I am importing the links that contain the scores from this web page:

http://www.covers.com/pageLoader/pageLoader.aspx?page=/data/wnba/teams/pastresults/2012/team665231.html

This is how I am doing it now. I get the links from the first page:

import re
import urllib2
from BeautifulSoup import BeautifulSoup   # with bs4: from bs4 import BeautifulSoup

url = 'http://www.covers.com/pageLoader/pageLoader.aspx?page=/data/wnba/teams/pastresults/2012/team665231.html'

boxurl = urllib2.urlopen(url).read()
soup = BeautifulSoup(boxurl)

boxscores = soup.findAll('a', href=re.compile('boxscore'))
basepath = "http://www.covers.com"
pages = []          # this grabs the HTML of every box-score page linked from the first page
for a in boxscores:
    pages.append(urllib2.urlopen(basepath + a['href']).read())
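The href filter in the block above can be exercised offline with a plain regular expression, which may help when debugging what `re.compile('boxscore')` actually matches. A minimal sketch against an inline snippet (the hrefs here are made up, not taken from the real page):

```python
import re

# Hypothetical snippet standing in for the live page's HTML
html = ('<a href="/data/wnba/boxscore123.html">Final</a>'
        '<a href="/data/wnba/schedule.html">Schedule</a>')

# Keep only hrefs whose value contains "boxscore", as the question does
hrefs = re.findall(r'href="([^"]*boxscore[^"]*)"', html)
basepath = "http://www.covers.com"
urls = [basepath + h for h in hrefs]
```

Only the first link survives the filter, so `urls` holds a single absolute URL.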

Then in a new window I do this:

newsoup = pages[1]  # I am manually changing this every time

soup = BeautifulSoup(newsoup)
def _unpack(row, kind='td'):
    return [val.text for val in row.findAll(kind)]

tables = soup('table')
linescore = tables[1]   
linescore_rows = linescore.findAll('tr')
roadteamQ1 = float(_unpack(linescore_rows[1])[1])
roadteamQ2 = float(_unpack(linescore_rows[1])[2])
roadteamQ3 = float(_unpack(linescore_rows[1])[3])
roadteamQ4 = float(_unpack(linescore_rows[1])[4])  # add OT rows if ???
roadteamFinal = float(_unpack(linescore_rows[1])[-3])
hometeamQ1 = float(_unpack(linescore_rows[2])[1])
hometeamQ2 = float(_unpack(linescore_rows[2])[2])
hometeamQ3 = float(_unpack(linescore_rows[2])[3])
hometeamQ4 = float(_unpack(linescore_rows[2])[4])   # add OT rows if ???
hometeamFinal = float(_unpack(linescore_rows[2])[-3])    
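Each `_unpack(linescore_rows[1])` call above re-walks the same row, so the same work is done five times per team. The repeated indexing can be collapsed into one helper; a sketch, assuming (as the code above does) that cells 1 through 4 of a line-score row are the quarters and the third cell from the end is the final score:

```python
def unpack_linescore(cells):
    # cells: the text values of one line-score row, i.e. what _unpack(row) returns.
    # Assumes cells[1:5] are Q1-Q4 and cells[-3] is the final score,
    # matching the indexing used in the question.
    q1, q2, q3, q4 = (float(c) for c in cells[1:5])
    final = float(cells[-3])
    return q1, q2, q3, q4, final

# Hypothetical row text for illustration (team name, four quarters, final, filler)
result = unpack_linescore(['Mystics', '20', '18', '22', '19', '79', '', ''])
```

With this, each row is unpacked once: `roadQ1, roadQ2, roadQ3, roadQ4, roadFinal = unpack_linescore(_unpack(linescore_rows[1]))`.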

misc_stats = tables[5]
misc_stats_rows = misc_stats.findAll('tr')
roadteam = str(_unpack(misc_stats_rows[0])[0]).strip()
hometeam = str(_unpack(misc_stats_rows[0])[1]).strip()
datefinder = tables[6]
datefinder_rows = datefinder.findAll('tr')

date = str(_unpack(datefinder_rows[0])[0]).strip()
year = 2012
from dateutil.parser import parse
parsedDate = parse(date)
date = parsedDate.replace(year=year)
month = parsedDate.month
day = parsedDate.day
modDate = str(day)+str(month)+str(year)
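Note that `str(day)+str(month)+str(year)` is not zero-padded, so for example Jan 21 and Nov 2 both produce `'2112012'`, which makes the game ids ambiguous. `strftime` gives a fixed-width date instead; a sketch, assuming the page's date cell looks something like the string below:

```python
from dateutil.parser import parse

date_text = 'Saturday, May 19'             # assumed shape of the scraped date cell
parsed = parse(date_text).replace(year=2012)
mod_date = parsed.strftime('%d%m%Y')       # fixed-width, zero-padded
```

`mod_date` comes out as `'19052012'` rather than `'1952012'`.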
gameid = modDate + roadteam + hometeam

data = {'roadteam': [roadteam],
        'hometeam': [hometeam],
        'roadQ1': [roadteamQ1],
        'roadQ2': [roadteamQ2],   
        'roadQ3': [roadteamQ3],
        'roadQ4': [roadteamQ4],
        'homeQ1': [hometeamQ1],
        'homeQ2': [hometeamQ2],   
        'homeQ3': [hometeamQ3],
        'homeQ4': [hometeamQ4]}

globals()["%s" % gameid] = pd.DataFrame(data)
df = pd.DataFrame.load('df')
df = pd.concat([df, globals()["%s" % gameid]])
df.save('df')
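Writing each frame into `globals()["%s" % gameid]` makes the frames hard to find and loop over later. An ordinary dict keyed by `gameid` gives the same per-game lookup and can be concatenated in one call; a sketch with hypothetical id and team names:

```python
import pandas as pd

games = {}  # gameid -> DataFrame, replacing globals()["%s" % gameid]

gameid = '19052012RoadHome'   # hypothetical id built as in the code above
games[gameid] = pd.DataFrame({'roadteam': ['Road'], 'hometeam': ['Home']})

combined = pd.concat(list(games.values()))
```

Individual games stay reachable as `games[gameid]`, and `combined` holds every row.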

How can I automate this so that I don't have to manually change newsoup = pages[1] every time, and can scrape all of the box scores linked from the first URL in one pass? I am new to Python and am missing some of the basics.

1 Answer:

Answer 0 (score: 1):

So in the first code box you collected pages.

Then in the second code box you have to loop over it, if I understand correctly:

for page in pages:
    soup = BeautifulSoup(page)
    # rest of the code here
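Putting it together: wrap the second code box in a function that takes one page's HTML and returns one row, then loop over `pages` and build the DataFrame once at the end. A sketch with a stub parser; in the real script, the stub's body would be the BeautifulSoup/`_unpack` code from the question:

```python
import pandas as pd

def parse_boxscore(html):
    # Stub standing in for the parsing code in the question; the real
    # version would run BeautifulSoup over one box-score page and return
    # the roadteam/hometeam/quarter values as a dict.
    return {'roadteam': 'Road', 'hometeam': 'Home'}

pages = ['<html>game 1</html>', '<html>game 2</html>']  # placeholder pages

rows = [parse_boxscore(page) for page in pages]  # one dict per game
df = pd.DataFrame(rows)                          # build the frame once
```

Building a list of dicts and constructing the DataFrame once is generally cheaper and simpler than concatenating a new one-row frame per game.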