Python urllib: skip a URL on an HTTP or URL error

Date: 2012-05-10 20:34:01

Tags: python lxml urllib

How can I modify my script to skip a URL if the connection times out, or if the URL is invalid or returns a 404?


#!/usr/bin/python

#parser.py: Downloads Bibles and parses all data within <article> tags.

__author__      = "Cody Bouche"
__copyright__   = "Copyright 2012 Digital Bible Society"

from BeautifulSoup import BeautifulSoup
import lxml.html as html
import urlparse
import os, sys
import urllib2
import re

print ("downloading and parsing Bibles...")
root = html.parse(open('links.html'))
for link in root.findall('//a'):
    url = link.get('href')
    name = urlparse.urlparse(url).path.split('/')[-1]
    dirname = urlparse.urlparse(url).path.split('.')[-1]
    f = urllib2.urlopen(url)
    s = f.read()
    if (os.path.isdir(dirname) == 0):
        os.mkdir(dirname)
    soup = BeautifulSoup(s)
    articleTag = soup.html.body.article
    converted = str(articleTag)
    full_path = os.path.join(dirname, name)
    open(full_path, 'wb').write(converted)
    print(name)
print("DOWNLOADS COMPLETE!")

2 Answers:

Answer 0 (score: 2):

To apply a timeout to your request, add the timeout argument to your call to urlopen. From the docs:

"The optional timeout parameter specifies a timeout in seconds for blocking operations like the connection attempt (if not specified, the global default timeout setting will be used). This actually only works for HTTP, HTTPS and FTP connections."

See the section of this guide on how to handle exceptions with urllib2. I actually found the whole guide very useful.

The HTTP status code for a request timeout is 408. If you want to handle timeout exceptions, wrap the call like this:

from urllib2 import urlopen, URLError

try:
    # timeout is the third positional argument (after data), so pass it by keyword
    response = urlopen(req, timeout=3)  # 3 seconds
except URLError, e:
    if hasattr(e, 'code'):  # only HTTPError instances carry an HTTP status code
        if e.code == 408:
            print 'Timeout ', e.code
        if e.code == 404:
            print 'File Not Found ', e.code
        # etc etc
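
One caveat: a timeout that fires on the client side is not delivered as an HTTP 408 response. urllib2 raises a URLError whose reason is a socket.timeout (or, if the timeout hits while reading the response, socket.timeout directly), and such an exception has no code attribute, so the check above will not see it. A minimal sketch of the extra handling, assuming the same req and 3-second timeout as above:

import socket
import urllib2

try:
    response = urllib2.urlopen(req, timeout=3)
except urllib2.URLError, e:
    # a connect timeout carries no HTTP code; it shows up in e.reason
    if hasattr(e, 'reason') and isinstance(e.reason, socket.timeout):
        print 'Timed out connecting'
except socket.timeout:
    # a timeout while reading the response is raised as socket.timeout directly
    print 'Timed out reading'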

Answer 1 (score: 1):

Try putting your urlopen line inside a try/except statement. Take a close look at:

docs.python.org/tutorial/errors.html, section 8.3

Look through the different exceptions, and when you hit one, just use a continue statement to restart the loop and skip that URL; see the sketch below.
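
For example, here is a minimal sketch of the loop from the question with the urlopen call wrapped so that a dead or unreachable URL is simply skipped (the 3-second timeout is an arbitrary choice):

import socket
import urllib2

# root comes from html.parse(open('links.html')) in the question
for link in root.findall('//a'):
    url = link.get('href')
    try:
        f = urllib2.urlopen(url, timeout=3)
        s = f.read()
    except urllib2.HTTPError, e:
        # HTTPError is a subclass of URLError, so catch it first
        print 'skipping %s (HTTP %d)' % (url, e.code)
        continue
    except urllib2.URLError, e:
        # covers invalid URLs and connection timeouts
        print 'skipping %s (%s)' % (url, e.reason)
        continue
    except socket.timeout:
        # a timeout while reading the body is raised directly
        print 'skipping %s (read timed out)' % url
        continue
    # ... rest of the parsing and writing code from the question ...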