I have a script that scrapes a website. It worked flawlessly until today, but now it doesn't.
It fails with the following error:
ConnectionAbortedError(10060, 'A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond')
I have been searching for answers and fiddling with settings, but I cannot figure out how to fix this...
In IE I am not using any proxy (Connections -> LAN settings -> Proxy = disabled).
It breaks on this piece of code, sometimes on the first run, sometimes on the second... and so on:
import requests
import urlparse  # Python 2; on Python 3 use urllib.parse instead
from bs4 import BeautifulSoup

# headers, url, shorturl, visited, keepout, falseurls, doubleurls,
# urlfile, urls, totalnumberofitems and strip_tags are defined elsewhere in the script

def geturls(functionurl, runtime):
    startCrawl = requests.get(functionurl, headers=headers)
    mainHtml = BeautifulSoup(startCrawl.content, 'html.parser')
    mainItems = mainHtml.find("div", {"id": "js_multiselect_results"})
    for tag in mainItems.findAll('a', href=True):
        # resolve relative links against the base url
        tag['href'] = urlparse.urljoin(url, tag['href'])
        if shorturl in tag['href'] and tag['href'] not in visited:
            if any(x in tag['href'] for x in keepout):
                falseurls.append(tag['href'])
            elif tag['href'] in urls:
                doubleurls.append(tag['href'])
            else:
                urlfile.write(tag['href'] + "\n")
                urls.append(tag['href'])
    # total result count shown on the page
    totalItemsStart = str(mainHtml.find("span", {"id": "sab_header_results_size"}))
    if runtime == 1:
        totalnumberofitems[0] = totalItemsStart
        totalnumberofitems[0] = strip_tags(totalnumberofitems[0])
    return totalnumberofitems
How can I fix this?
Answer (score: 1)
Try increasing the timeout parameter of the requests.get call:
requests.get(functionurl, headers=headers, timeout=5)
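Since the error is intermittent, retrying transient connection failures may also help. Below is a minimal sketch using requests' built-in urllib3 retry support (not something from the original script); the retry counts and status codes are illustrative choices, and functionurl and headers come from the question:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# retry transient failures up to 3 times, backing off between attempts
session = requests.Session()
retries = Retry(total=3, backoff_factor=1, status_forcelist=[429, 500, 502, 503, 504])
session.mount('http://', HTTPAdapter(max_retries=retries))
session.mount('https://', HTTPAdapter(max_retries=retries))

startCrawl = session.get(functionurl, headers=headers, timeout=5)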
But chances are that your script is being blocked by the server to prevent scraping attempts. If that is the case, you can try faking a web browser by setting appropriate headers:
{"User-Agent": "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.8) Gecko/20100722 Firefox/3.6.8 GTB7.1 (.NET CLR 3.5.30729)", "Referer": "http://example.com"}