For example, how do I catch 404 and 403 errors for a page in Python with urllib(2)?
Is there a quick way to do this without big wrappers?
Added info (stack trace):
Traceback (most recent call last):
  File "test.py", line 3, in <module>
    page = urllib2.urlopen("http://localhost:4444")
  File "/usr/lib/python2.6/urllib2.py", line 126, in urlopen
    return _opener.open(url, data, timeout)
  File "/usr/lib/python2.6/urllib2.py", line 391, in open
    response = self._open(req, data)
  File "/usr/lib/python2.6/urllib2.py", line 409, in _open
    '_open', req)
  File "/usr/lib/python2.6/urllib2.py", line 369, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.6/urllib2.py", line 1161, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/usr/lib/python2.6/urllib2.py", line 1136, in do_open
    raise URLError(err)
urllib2.URLError: <urlopen error [Errno 111] Connection refused>
Answer 0 (score: 22)
import urllib2

try:
    page = urllib2.urlopen("some url")
except urllib2.HTTPError, err:
    # HTTPError: the server answered, but with an error status code
    if err.code == 404:
        print "Page not found!"
    elif err.code == 403:
        print "Access denied!"
    else:
        print "Something happened! Error code", err.code
except urllib2.URLError, err:
    # URLError: no HTTP response at all (DNS failure, connection refused, ...)
    print "Some other error happened:", err.reason
In your case, the error happens before an HTTP connection can even be established, which is why you need the additional handler that catches URLError. That failure has nothing to do with 404 or 403 errors, though.
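For the connection-refused traceback above, the exception lands in that URLError branch. A minimal sketch, assuming nothing is listening on localhost:4444 as in the original trace:

import urllib2

try:
    page = urllib2.urlopen("http://localhost:4444")
except urllib2.HTTPError, err:
    print "HTTP error:", err.code
except urllib2.URLError, err:
    # With nothing listening on that port, this prints something like:
    # Failed to reach the server: [Errno 111] Connection refused
    print "Failed to reach the server:", err.reason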
Answer 1 (score: 5)
import urllib2

req = urllib2.Request('url')
try:
    urllib2.urlopen(req)
except urllib2.HTTPError, e:
    # Only HTTPError carries the status code and the error page body
    print e.code
    print e.read()
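Note that urllib2.HTTPError is a subclass of urllib2.URLError, so a handler written for URLError also receives HTTP errors, but only HTTPError instances have a code attribute. If you prefer a single handler for both cases, a minimal sketch (the URL is a placeholder):

import urllib2

try:
    urllib2.urlopen("http://example.com/some-page")  # placeholder URL
except urllib2.URLError, e:
    if hasattr(e, 'code'):
        # The server replied with an HTTP error status (404, 403, ...)
        print "The server returned error code", e.code
    else:
        # No HTTP response at all (e.g. DNS failure, connection refused)
        print "Failed to reach the server:", e.reason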