OK, when I test my script it runs for a while (sometimes 3 minutes, sometimes 25 minutes) and then fails with this error:
File "test.py", line 75, in free
srch = br1.open("url")
File "/usr/lib/python2.7/dist-packages/mechanize/_mechanize.py", line 203,
in open
return self._mech_open(url, data, timeout=timeout)
File "/usr/lib/python2.7/dist-packages/mechanize/_mechanize.py", line 230,
in _mech_open
response = UserAgentBase.open(self, request, data)
File "/usr/lib/python2.7/dist-packages/mechanize/_opener.py", line 193, in
open
response = urlopen(self, req, data)
File "/usr/lib/python2.7/dist-packages/mechanize/_urllib2_fork.py", line
344, in _open
'_open', req)
File "/usr/lib/python2.7/dist-packages/mechanize/_urllib2_fork.py", line
332, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/dist-packages/mechanize/_urllib2_fork.py", line
1170, in https_open
return self.do_open(conn_factory, req)
File "/usr/lib/python2.7/dist-packages/mechanize/_urllib2_fork.py", line
1118, in do_open
raise URLError(err)
urllib2.URLError: <urlopen error [Errno 0] Error>
To fix it, I just restart the script. I suspect the error comes from the network. Is it possible to work around it, or to force my script to retry the request?
Answer 0 (score: 1)
Taken almost straight from the documentation...
from urllib.request import Request, urlopen
from urllib.error import URLError

def download_url(url, attempts):
    for attempt in range(attempts):
        try:
            req = Request(url)
            response = urlopen(req)
        except URLError as e:
            # Log why this attempt failed, then fall through to the next one
            if hasattr(e, 'reason'):
                print('Reason: ', e.reason)
            elif hasattr(e, 'code'):
                print('Error code: ', e.code)
        else:
            # No exception was raised: the request succeeded
            return response.read()
    return None

print(download_url('http://www.google.com', 3))
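The snippet above targets Python 3's urllib, while the traceback in the question comes from mechanize on Python 2.7. Below is a minimal sketch of the same bounded-retry idea adapted to that setup; the helper name, the attempts/delay parameters, and the placeholder "url" are assumptions for illustration, not part of either answer:

import time
import urllib2
import mechanize

def open_with_retries(browser, url, attempts=3, delay=5):
    # Hypothetical helper: retry browser.open() up to `attempts` times,
    # sleeping `delay` seconds after each failure.
    for attempt in range(attempts):
        try:
            return browser.open(url)
        except urllib2.URLError as e:
            print('Attempt %d failed: %s' % (attempt + 1, e))
            time.sleep(delay)
    return None  # every attempt failed

br1 = mechanize.Browser()
srch = open_with_retries(br1, "url")

As in the documentation example, callers should check for None before using the response.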
Answer 1 (score: -1)
while 1:
    try:
        srch = br1.open("url")
        break  # success: leave the retry loop
    except urllib2.URLError:
        pass   # failed: loop around and try again
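Note that this loop retries forever with no pause between attempts, so if the network stays down it will re-issue the request in a tight loop. A short time.sleep() in the except branch, or a capped attempt count as in the answer above, is gentler on both sides.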
This question has already been answered here: Python urllib2 URLError HTTP status code.