Why can't I parse this HTML page with Python?

Asked: 2015-02-21 22:45:08

Tags: python html beautifulsoup

I'm trying to parse information from the web page http://www.baseball-reference.com/teams/BOS/2000-pitching.shtml in Python using BeautifulSoup. I want to print the name of each player in the "Team Pitching" table. However, after a certain point the code repeats the players' names (in this case, starting at line 15 it repeats from 'Pedro Martinez'). For example:

1 Pedro Martinez
2 Jeff Fassero*
3 Ramon Martinez
4 Pete Schourek*
5 Rolando Arrojo
6 Tomo Ohka
7 Derek Lowe
8 Tim Wakefield
9 Rich Garces
10 Rheal Cormier*
11 Hipolito Pichardo
12 Brian Rose
13 Bryce Florie
14 John Wasdin
15 Pedro Martinez
16 Jeff Fassero*
17 Ramon Martinez
18 Pete Schourek*
19 Rolando Arrojo
20 Tomo Ohka
21 Derek Lowe
22 Tim Wakefield
23 Rich Garces
24 Rheal Cormier*
25 Hipolito Pichardo
26 Brian Rose
27 Bryce Florie
28 John Wasdin

Do you know what's going on? Here is my code:

# Sample web page
#http://www.baseball-reference.com/teams/BOS/2000-pitching.shtml


import urllib2
import lxml
from bs4 import BeautifulSoup

# Download the 2000 web page

y = 2000
url = 'http://www.baseball-reference.com/teams/BOS/'+ str(y) +'-pitching.shtml'
print 'Download from :', url

#download
filehandle = urllib2.urlopen(url)


fileout = 'YEARS'+str(y)+'.html'
print 'Save to : ', fileout, '\n'

#save file to disk
f = open(fileout,'w')
f.write(filehandle.read())
f.close()


# Read and parse the html file

# Parse information about the age of players in 2000

y = 2000

filein = 'YEARS' + str(y) + '.html'
print(filein)
soup = BeautifulSoup(open(filein))


entries = soup.find_all('tr', attrs={'class' : ''}) #' non_qual' ''
print(len(entries)) 

i = 0
for entry in entries:


    columns = entry.find_all('td')  
    #print(len(columns), 'i:', i)
    if len(columns) == 34: # Number of columns of the table

        i = i + 1

        #print i, len(columns)  
        age = columns[2].get_text()

        print i, age

1 Answer:

Answer 0 (score: 0)

You are trying to loop over all the rows on the page without first actually grabbing the table tags. So: grab all the table tags, then loop over the tr tags inside each table, if that makes sense. Also, yeartable is undefined, so I assumed the year is y and the table variable is t. As a side note, you don't have to download the HTML and then open the file to parse it; you can fetch the page and pass the response text straight to BeautifulSoup.
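The duplication itself can be reproduced with a small self-contained sketch. The HTML below is made up for illustration (the real page is much larger): two tables whose rows look identical. Searching the whole document for tr tags picks up rows from both, which is why every name comes back twice.

```python
from bs4 import BeautifulSoup

# Hypothetical stand-in for the real page: the player table plus a
# second table that happens to repeat the same rows.
html = """
<table id="team_pitching">
  <tr><td>1</td><td>Pedro Martinez</td></tr>
  <tr><td>2</td><td>Derek Lowe</td></tr>
</table>
<table id="team_pitching_extra">
  <tr><td>1</td><td>Pedro Martinez</td></tr>
  <tr><td>2</td><td>Derek Lowe</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")

# Searching the whole document walks the rows of BOTH tables,
# so every name appears twice.
print(len(soup.find_all('tr')))  # 4

# Restricting the search to the one table we care about fixes it.
table = soup.find('table', attrs={'id': 'team_pitching'})
print(len(table.find_all('tr')))  # 2
```

The fixed code below applies the same idea to the real page by selecting the table with id 'team_pitching' first.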

import urllib2
from bs4 import BeautifulSoup

# Download the 2010 web page

y = 2010
url = 'http://www.baseball-reference.com/teams/BOS/'+ str(y) +'-pitching.shtml'
print 'Download from :', url

#download
filehandle = urllib2.urlopen(url)


fileout = 'YEARS'+str(y)+'.html'
print 'Save to : ', fileout, '\n'

#save file to disk
f = open(fileout,'w')
f.write(filehandle.read())
f.close()


# Read and parse the html file

# Parse information about the players in 2010

y = 2010

filein = 'YEARS' + str(y) + '.html'
print(filein)
soup = BeautifulSoup(open(filein))


table = soup.find_all('table', attrs={'id': 'team_pitching'}) #' non_qual' ''


for t in table:

    i = 1
    entries = t.find_all('tr', attrs={'class' : ''}) #' non_qual' ''
    print(len(entries))
    for entry in entries:
        columns = entry.find_all('td')
        printString = str(i) + ' '
        for col in columns:
            try:
                # Player-name cells carry a 'csk' sort key like
                # "Lastname,Firstname"; other cells either lack the
                # attribute or have no comma in it.
                if ',' in col['csk']:
                    printString = printString + col.text
                    i = i + 1
                    print printString
            except KeyError:
                # Cell has no 'csk' attribute; skip it
                pass