This is my first time doing web scraping. I followed a tutorial, but when I tried to scrape a different page I got the following error:

gamesplayed = data[1].getText()
IndexError: list index out of range
Here is my code so far:
from bs4 import BeautifulSoup
import urllib.request
import csv
urlpage = 'https://www.espn.com/soccer/standings/_/league/FIFA.WORLD/fifa-world-cup'
page = urllib.request.urlopen(urlpage)
soup = BeautifulSoup(page, 'html.parser')
#print(soup)
table = soup.find('table', attrs={'class': 'Table2__table__wrapper'})
results = table.find_all('tr')
#print('Number of results:', len(results))
rows = []
rows.append(['Group A', 'Games Played', 'Wins', 'Draws', 'Losses', 'Goals For', 'Goals Against', 'Goal Difference', 'Points'])
print(rows)
# loop over results
for result in results:
    # find all columns per result
    data = result.find_all('td')
    # check that columns have data
    if len(data) == 0:
        continue
    # write columns to variables
    groupa = data[0].getText()
    gamesplayed = data[1].getText()
    wins = data[2].getText()
    draws = data[3].getText()
    losses = data[4].getText()
    goalsfor = data[5].getText()
    goalsagainst = data[6].getText()
    goaldifference = data[7].getText()
    point = data[8].getText()
Thanks to all the wise ones and gurus who understand this! Luna
Answer 0 (score: 0)
The error message is quite descriptive: you are trying to access an index that does not exist in the list.
If data must contain at least 9 elements (you are accessing indices 0 through 8), you should probably change

if len(data) == 0:
    continue

to

if len(data) < 9:
    continue

so that rows with too few cells are safely skipped.
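To see the effect of that guard in isolation, here is a minimal, self-contained sketch. It uses a made-up inline HTML table (not the actual ESPN page, whose markup is an assumption we cannot rely on here): the header row and a short row are skipped, and only the complete 9-cell row is collected.

```python
from bs4 import BeautifulSoup

# Hypothetical HTML standing in for a standings table:
# one header row, one complete data row, one incomplete row.
html = """
<table>
  <tr><th>Team</th><th>GP</th></tr>
  <tr><td>A</td><td>3</td><td>2</td><td>1</td><td>0</td><td>5</td><td>2</td><td>3</td><td>7</td></tr>
  <tr><td>incomplete row</td></tr>
</table>
"""

soup = BeautifulSoup(html, 'html.parser')
rows = []
for result in soup.find_all('tr'):
    data = result.find_all('td')
    # Rows with fewer than 9 <td> cells (header rows, spacer rows,
    # malformed rows) are skipped instead of raising IndexError.
    if len(data) < 9:
        continue
    rows.append([cell.getText() for cell in data])

print(rows)  # only the complete row survives the guard
```

The same guard dropped into the loop in the question would skip whichever row currently has only one cell instead of crashing on data[1].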
Answer 1 (score: 0)
Have a look at the following:
Public Sub DoSomething()
    'do stuff...
    On Error GoTo CleanFail
    For i = a To b
        'do more stuff...
Skip:
    Next
    Exit Sub ' end of happy path
CleanFail: ' begin error handling code
    Debug.Print Err.Description; ". Skipping iteration #" & i
    Resume Skip ' clears error state and jumps to Skip label
End Sub
It skips an iteration when an error occurs, analogous to the guard below:

if len(data) == 0:
    continue
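For completeness, a rough Python analogue of that VBA "skip on error" pattern (my own sketch, using made-up sample data rather than the scraped rows) catches the IndexError inside the loop and moves on to the next iteration:

```python
# Hypothetical rows: the middle one is too short and would
# raise IndexError when its second element is accessed.
tables = [['a', 'b'], ['only-one'], ['c', 'd']]

rows = []
for data in tables:
    try:
        first = data[0]
        second = data[1]  # raises IndexError for the short row
    except IndexError:
        continue          # skip this iteration, like Resume Skip in VBA
    rows.append((first, second))

print(rows)  # the short row is silently skipped
```

Checking len(data) up front, as in the other answer, is usually clearer in Python; try/except is handy when the failure mode is harder to predict.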