Python web crawler: connection timed out

时间:2013-01-23 22:58:38

标签: python web-crawler beautifulsoup

I am trying to implement a simple web crawler, and I have written some simple code to get started: there are two modules, fetcher.py and crawler.py. Here are the files:

fetcher.py:

    import urllib2

    def fetcher(s):
        "fetch a web page from a url"
        try:
            req = urllib2.Request(s)
            urlResponse = urllib2.urlopen(req).read()
        except urllib2.URLError as e:
            print e.reason
            return

        # derive a file name from the host part of the url and
        # cache the page on disk so it can be re-read later
        p, q = s.split("//")
        d = q.split("/")
        fdes = open(d[0], "w+")
        fdes.write(str(urlResponse))
        fdes.seek(0)
        return fdes

    if __name__ == "__main__":
        defaultSeed = "http://www.python.org"
        print fetcher(defaultSeed)

crawler.py:

    from bs4 import BeautifulSoup
    import re
    from fetcher import fetcher

    usedLinks = open("Used", "a+")   # log of urls already crawled
    newLinks = open("New", "w+")     # queue of urls discovered so far

    newLinks.seek(0)

    def parse(fd, var=0):
        "extract links from a fetched page and return the next url to crawl"
        soup = BeautifulSoup(fd)
        for li in soup.find_all("a", href=re.compile("http")):
            newLinks.seek(0, 2)      # append each discovered link
            newLinks.write(str(li.get("href")).strip("/"))
            newLinks.write("\n")

        fd.close()
        newLinks.seek(var)           # read back the next unvisited link
        link = newLinks.readline().strip("\n")

        return str(link)

    def crawler(seed, n):
        if n == 0:
            usedLinks.close()
            newLinks.close()
            return
        else:
            usedLinks.write(seed)
            usedLinks.write("\n")
            fdes = fetcher(seed)
            newSeed = parse(fdes, newLinks.tell())
            crawler(newSeed, n - 1)

    if __name__ == "__main__":
        crawler("http://www.python.org/", 7)

The problem is that when I run crawler.py, it works for the first 4-5 links, then hangs, and after a minute or so gives the following error:

    [Errno 110] Connection timed out
    Traceback (most recent call last):
      File "crawler.py", line 37, in <module>
        crawler("http://www.python.org/",7)
      File "crawler.py", line 34, in crawler
        crawler(newSeed,n-1)
      File "crawler.py", line 34, in crawler
        crawler(newSeed,n-1)
      File "crawler.py", line 34, in crawler
        crawler(newSeed,n-1)
      File "crawler.py", line 34, in crawler
        crawler(newSeed,n-1)
      File "crawler.py", line 34, in crawler
        crawler(newSeed,n-1)
      File "crawler.py", line 33, in crawler
        newSeed = parse(fdes,newLinks.tell())
      File "crawler.py", line 11, in parse
        soup = BeautifulSoup(fd)
      File "/usr/lib/python2.7/dist-packages/bs4/__init__.py", line 169, in __init__
        self.builder.prepare_markup(markup, from_encoding))
      File "/usr/lib/python2.7/dist-packages/bs4/builder/_lxml.py", line 68, in prepare_markup
        dammit = UnicodeDammit(markup, try_encodings, is_html=True)
      File "/usr/lib/python2.7/dist-packages/bs4/dammit.py", line 191, in __init__
        self._detectEncoding(markup, is_html)
      File "/usr/lib/python2.7/dist-packages/bs4/dammit.py", line 362, in _detectEncoding
        xml_encoding_match = xml_encoding_re.match(xml_data)
    TypeError: expected string or buffer

Can anyone help me with this? I am new to Python and cannot figure out why the connection times out after a while.

2 Answers:

Answer 0 (score: 0):

A connection timeout is not specific to Python; it simply means that you made a request to the server and the server did not respond within the time your application was willing to wait.
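
One way to make that wait explicit, and to fail fast instead of hanging, is to pass a `timeout` to `urllib2.urlopen` and catch `socket.timeout`. A minimal sketch (the 10-second value is an arbitrary choice, and `fetch_with_timeout` is a hypothetical helper, not part of the question's code):

    import socket
    import urllib2

    def fetch_with_timeout(url, seconds=10):
        "fetch a url, giving up after `seconds` instead of waiting indefinitely"
        try:
            # urllib2.urlopen accepts a timeout (in seconds) since Python 2.6
            return urllib2.urlopen(url, timeout=seconds).read()
        except socket.timeout:
            print "timed out:", url
            return None
        except urllib2.URLError as e:
            # a timeout can also surface as a URLError with a socket.timeout reason
            print "failed:", url, e.reason
            return None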

The most likely reason this happens is that python.org may have some mechanism to detect when it is getting multiple requests from a script, and may stop serving pages entirely after 4-5 requests. There is not much you can do to avoid this other than trying your script on a different website.
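
If request-rate detection is the cause, slowing the crawler down can help. A minimal sketch that reuses the question's `fetcher`, `parse`, and `newLinks`, but sleeps between requests (the 2-second delay is a guess, not a known python.org limit):

    import time

    def polite_crawl(seed, n, delay=2):
        "same recursive crawl, but pause `delay` seconds between requests"
        if n == 0:
            return
        fdes = fetcher(seed)
        if fdes is None:      # fetcher returns None when the request failed
            return
        newSeed = parse(fdes, newLinks.tell())
        time.sleep(delay)     # throttle so the server sees fewer requests per second
        polite_crawl(newSeed, n - 1, delay)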

Answer 1 (score: 0):

You could try using a proxy to avoid being detected for making multiple requests, as stated above. You may want to check this answer to see how to send urllib requests via a proxy: How to open website with urllib via Proxy - Python
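
A minimal sketch of routing `urllib2` through a proxy with `urllib2.ProxyHandler`; the address below is a placeholder, to be replaced with a proxy you actually have access to:

    import urllib2

    # placeholder proxy address, substitute a real one
    proxy = urllib2.ProxyHandler({"http": "http://127.0.0.1:8080"})
    opener = urllib2.build_opener(proxy)
    urllib2.install_opener(opener)   # later urllib2.urlopen calls go through the proxy

    page = urllib2.urlopen("http://www.python.org/").read()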