Python Scraper - socket error breaks the script if the target is 404'd

Asked: 2012-01-14 04:42:50

Tags: python sockets beautifulsoup

I hit an error while building a web scraper that compiles data and outputs it in XLS format. When re-testing against the list of domains I want to scrape, the program crashes as soon as it receives a socket error. I'm hoping for an 'if' statement that would let me skip a broken website and continue my while loop. Any ideas?

# Imports needed for this snippet: xlrd/xlwt for the spreadsheets,
# urllib.urlopen for fetching, and BeautifulSoup 3 for parsing
import xlrd
import xlwt
from urllib import urlopen
from BeautifulSoup import BeautifulSoup

# listSelection is set earlier in the script
workingList = xlrd.open_workbook(listSelection)
workingSheet = workingList.sheet_by_index(0)
destinationList = xlwt.Workbook()
destinationSheet = destinationList.add_sheet('Gathered')
startX = 1
startY = 0
while startX != 21:
    workingCell = workingSheet.cell(startX,startY).value
    print ''
    print ''
    print ''
    print workingCell
    #Setup
    preSite = 'http://www.'+workingCell
    theSite = urlopen(preSite).read()   # dies here when the host cannot be resolved
    currentSite = BeautifulSoup(theSite)
    destinationSheet.write(startX,0,workingCell)

Here is the error:

Traceback (most recent call last):
  File "<pyshell#2>", line 1, in <module>
    homeMenu()
  File "C:\Python27\farming.py", line 31, in homeMenu
    openList()
  File "C:\Python27\farming.py", line 79, in openList
    openList()
  File "C:\Python27\farming.py", line 83, in openList
    openList()
  File "C:\Python27\farming.py", line 86, in openList
    homeMenu()
  File "C:\Python27\farming.py", line 34, in homeMenu
    startScrape()
  File "C:\Python27\farming.py", line 112, in startScrape
    theSite = urlopen(preSite).read()
  File "C:\Python27\lib\urllib.py", line 84, in urlopen
    return opener.open(url)
  File "C:\Python27\lib\urllib.py", line 205, in open
    return getattr(self, name)(url)
  File "C:\Python27\lib\urllib.py", line 342, in open_http
    h.endheaders(data)
  File "C:\Python27\lib\httplib.py", line 951, in endheaders
    self._send_output(message_body)
  File "C:\Python27\lib\httplib.py", line 811, in _send_output
    self.send(msg)
  File "C:\Python27\lib\httplib.py", line 773, in send
    self.connect()
  File "C:\Python27\lib\httplib.py", line 754, in connect
    self.timeout, self.source_address)
  File "C:\Python27\lib\socket.py", line 553, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
IOError: [Errno socket error] [Errno 11004] getaddrinfo failed

1 Answer:

Answer 0 (score: 5)

Umm, that looks like the error I get when my internet connection is down. An HTTP 404 error is what you get when you do make a connection but the specified URL can't be found.

There is no if statement for handling exceptions; you need to "catch" them using a try/except construct.
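As a minimal sketch applied to the loop from the question (reusing its variable names, and assuming startX is advanced at the end of each pass as in the full script), the fetch could be wrapped like this so the loop keeps going:

while startX != 21:
    workingCell = workingSheet.cell(startX, startY).value
    preSite = 'http://www.' + workingCell
    try:
        theSite = urlopen(preSite).read()
    except IOError as e:
        # name resolution failures, refused connections, etc. surface as IOError here
        print 'skipping', workingCell, ':', e
        startX = startX + 1
        continue   # move on to the next domain instead of crashing
    currentSite = BeautifulSoup(theSite)
    destinationSheet.write(startX, 0, workingCell)
    startX = startX + 1   # assumed: the counter advances each pass in the full script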

UPDATE: Here is a demonstration:

import urllib

def getconn(url):
    # Return (connection, None) on success, or (None, the exception) if opening fails
    try:
        conn = urllib.urlopen(url)
        return conn, None
    except IOError as e:
        return None, e

urls = """
    qwerty
    http://www.foo.bar.net
    http://www.google.com
    http://www.google.com/nonesuch
    """
for url in urls.split():
    print
    print url
    conn, exc = getconn(url)
    if conn:
        print "connected; HTTP response is", conn.getcode()
    else:
        print "failed"
        print exc.__class__.__name__
        print str(exc)
        print exc.args

Output:

qwerty
failed
IOError
[Errno 2] The system cannot find the file specified: 'qwerty'
(2, 'The system cannot find the file specified')

http://www.foo.bar.net
failed
IOError
[Errno socket error] [Errno 11004] getaddrinfo failed
('socket error', gaierror(11004, 'getaddrinfo failed'))

http://www.google.com
connected; HTTP response is 200

http://www.google.com/nonesuch
connected; HTTP response is 404

Note that so far we have merely opened the connection. Now what you need to do is check the HTTP response code and decide whether there is anything worth retrieving with conn.read().
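For example, a rough sketch of that check (reusing getconn from the demo above, plus preSite and BeautifulSoup from the question; only pages that come back with HTTP 200 get read and parsed):

conn, exc = getconn(preSite)
if conn is None:
    print 'connection failed:', exc
elif conn.getcode() == 200:
    theSite = conn.read()
    currentSite = BeautifulSoup(theSite)   # parse only when the page actually exists
else:
    print 'skipping, HTTP response was', conn.getcode()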