I am fetching a large number of URLs with this program, using multiple threads. In the first version I used
myreq.fp._sock.fp._sock.shutdown(socket.SHUT_RDWR)
to close the connection, but it did not always seem to download all of the data. On the other hand, when I switched to
#myreq.fp._sock.recv=None # hacky avoidance
the connection to some sites would sometimes hang for a long time, more than a minute.
Code:
import socket
import urllib2

useragent = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11'}
request = urllib2.Request(url, None, useragent)
try:
    myreq = urllib2.urlopen(request, timeout=threadtimeout)
    html_code = myreq.read()
    # Reach into urllib2's internals to force the socket closed.
    myreq.fp._sock.fp._sock.shutdown(socket.SHUT_RDWR)
    #myreq.fp._sock.recv=None # hacky avoidance
    myreq.close()
except Exception:
    html_code = ""
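For context, here is a minimal sketch of the kind of threaded driver the question describes; the worker function, the urls list, and the fixed threadtimeout value are illustrative assumptions, not code from the post:

import threading
import urllib2

threadtimeout = 10  # assumed value; the post never shows it

def worker(url, results):
    # Hypothetical wrapper around the download block above.
    useragent = {'User-Agent': 'Mozilla/5.0'}
    request = urllib2.Request(url, None, useragent)
    try:
        myreq = urllib2.urlopen(request, timeout=threadtimeout)
        results[url] = myreq.read()
        myreq.close()
    except Exception:
        results[url] = ""

results = {}
urls = ['http://example.com/', 'http://example.org/']  # illustrative
threads = [threading.Thread(target=worker, args=(u, results)) for u in urls]
for t in threads:
    t.start()
for t in threads:
    t.join()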
Answer 0 (score: 0)
import socket
import urllib2

# Apply a default timeout to every socket created from here on,
# including the ones urllib2 opens internally.
timeout = 10
socket.setdefaulttimeout(timeout)

myreq = urllib2.urlopen(request)  # 'request' as built in the question
html_code = myreq.read()
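Note that socket.setdefaulttimeout() changes the default for every socket the process creates afterwards. Since Python 2.6, urllib2.urlopen() also accepts a per-call timeout argument (as the question's own code already uses), which keeps the setting local to one request; a minimal sketch, reusing the request object from the question:

import urllib2

# Per-call timeout: affects only this request, not other sockets.
myreq = urllib2.urlopen(request, timeout=10)
html_code = myreq.read()
myreq.close()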
Answer 1 (score: 0)
This works better, since requests handles closing the connection automatically:
import requests

useragent = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11'}
try:
    # No urllib2.Request object is needed; requests builds the request itself.
    response = requests.get(url, headers=useragent, timeout=threadtimeout)
    html_code = response.text
except Exception:
    html_code = ""
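Two caveats worth noting. First, requests' timeout covers connecting and each individual read, not the total download, so a server that trickles data slowly can still keep a thread busy well past threadtimeout. Second, if connections appear to linger, the response can be closed explicitly; a minimal sketch using contextlib.closing, with the same assumed url, useragent, and threadtimeout as above:

import requests
from contextlib import closing

try:
    # closing() guarantees response.close() runs, releasing the connection.
    with closing(requests.get(url, headers=useragent, timeout=threadtimeout)) as response:
        html_code = response.text
except Exception:
    html_code = ""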