Random "IndexError: list index out of range"

Asked: 2013-01-24 20:07:12

Tags: python screen-scraping

I'm scraping a site that returns its data via Javascript. The code I wrote with BeautifulSoup works fine, but at random points during the scrape I get the following error:

Traceback (most recent call last):
File "scraper.py", line 48, in <module>
accessible = accessible[0].contents[0]
IndexError: list index out of range

Sometimes I can scrape 4 URLs, sometimes 15, but at some point the script always fails and gives me the error above. I can't find any pattern behind the failures, so I'm really at a loss. What am I doing wrong?

from bs4 import BeautifulSoup
import urllib
import urllib2
import jabba_webkit as jw
import csv
import string
import re
import time

countries = csv.reader(open("countries.csv", 'rb'), delimiter=",")
database = csv.writer(open("herdict_database.csv", 'w'), delimiter=',')

basepage = "https://www.herdict.org/explore/"
session_id = "indepth;jsessionid=C1D2073B637EBAE4DE36185564156382"
ccode = "#fc=IN"
end_date = "&fed=12/31/"
start_date = "&fsd=01/01/"

year_range = range(2009, 2011)
years = [str(year) for year in year_range]

def get_number(var):
    number = re.findall("(\d+)", var)

    if len(number) > 1:
        thing = number[0] + number[1]
    else:
        thing = number[0]

    return thing

def create_link(basepage, session_id, ccode, end_date, start_date, year):
    link = basepage + session_id + ccode + end_date + year + start_date + year
    return link



for ccode, name in countries:
    for year in years:
        link = create_link(basepage, session_id, ccode, end_date, start_date, year)
        print link
        html = jw.get_page(link)
        soup = BeautifulSoup(html, "lxml")

        accessible = soup.find_all("em", class_="accessible")
        inaccessible = soup.find_all("em", class_="inaccessible")

        accessible = accessible[0].contents[0]
        inaccessible = inaccessible[0].contents[0]

        acc_num = get_number(accessible)
        inacc_num = get_number(inaccessible)

        print acc_num
        print inacc_num
        database.writerow([name]+[year]+[acc_num]+[inacc_num])

        time.sleep(2)
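For reference, the failure reproduces with any page that happens to contain no matching `<em>` tags, which is presumably what the Javascript-rendered pages sometimes return (a minimal sketch; `html.parser` is used here instead of `lxml` to avoid the extra dependency):

```python
from bs4 import BeautifulSoup

# A response with no <em class="accessible"> element at all:
soup = BeautifulSoup("<p>no results here</p>", "html.parser")
accessible = soup.find_all("em", class_="accessible")
print(list(accessible))  # [] (an empty result set)

try:
    accessible[0].contents[0]
except IndexError as exc:
    print(exc)  # list index out of range
```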

2 Answers:

Answer 0 (score: 4)

You need to add error handling to your code. When you scrape a lot of sites, some of them will be malformed or broken in some way, and when that happens you end up trying to operate on empty objects.

Go through the code, find every assumption you make about it working, and check for errors.

For this specific case, I would do:

if not inaccessible or not accessible:
    # malformed page
    continue
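Folding that check into a helper might look like this (a minimal sketch; the `extract_counts` name is illustrative, and `html.parser` stands in for `lxml` so the example has no extra dependency):

```python
from bs4 import BeautifulSoup

def extract_counts(html):
    """Return (accessible_text, inaccessible_text), or None when the page is malformed."""
    soup = BeautifulSoup(html, "html.parser")
    accessible = soup.find_all("em", class_="accessible")
    inaccessible = soup.find_all("em", class_="inaccessible")
    # Guard both the empty result set and an empty tag before indexing into either.
    if (not accessible or not inaccessible
            or not accessible[0].contents or not inaccessible[0].contents):
        return None
    return accessible[0].contents[0], inaccessible[0].contents[0]
```

The caller can then `continue` to the next URL whenever the helper returns `None`, instead of crashing mid-scrape.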

Answer 1 (score: 3)

soup.find_all("em", class_="accessible") may return an empty list. You can try:

if accessible:
    accessible = accessible[0].contents[0]

Or more generally:

if accessible and inaccessible:
    accessible = accessible[0].contents[0]
    inaccessible = inaccessible[0].contents[0]
else:
    print 'Something went wrong!'
    continue
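The same guard can also be written as a try/except around the indexing, which additionally covers the case where the matched tag exists but has no children. A small sketch (the `first_text` helper and the `FakeTag` stand-in are illustrative, not part of the answer):

```python
def first_text(elements):
    """Return elements[0].contents[0], or None when either index is out of range."""
    try:
        return elements[0].contents[0]
    except IndexError:
        return None

class FakeTag(object):
    """Stand-in for a BeautifulSoup Tag: just exposes .contents."""
    def __init__(self, contents):
        self.contents = contents

print(first_text([]))                      # None: empty result set
print(first_text([FakeTag([])]))           # None: tag with no children
print(first_text([FakeTag(["5 sites"])]))  # 5 sites
```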