Scrapy crawler: Caught exception reading instance data

Date: 2015-07-05 16:44:31

Tags: python web-crawler scrapy

I'm new to Python and want to build a web crawler with Scrapy. I followed the tutorial at http://blog.siliconstraits.vn/building-web-crawler-scrapy/. The spider code looks like this:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from nettuts.items import NettutsItem
from scrapy.http import Request

class MySpider(BaseSpider):
    name = "nettuts"
    allowed_domains = ["net.tutsplus.com"]
    start_urls = ["http://net.tutsplus.com/"]

    # Note: parse() must be indented inside the class; at module level,
    # Scrapy falls back to the base implementation, which is not what we want.
    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        # Extract the text of every post title link on the page
        titles = hxs.select('//h1[@class="post_title"]/a/text()').extract()
        for title in titles:
            item = NettutsItem()
            item["title"] = title
            yield item
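For reference, NettutsItem is imported from the tutorial project's nettuts/items.py. A minimal sketch of what that file would need to contain for this spider to run, with only the title field inferred from the code above (the tutorial may define more fields):

# nettuts/items.py -- minimal sketch inferred from the spider above
from scrapy.item import Item, Field

class NettutsItem(Item):
    title = Field()  # the only field the spider actually sets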

When I launch the spider from the command line with scrapy crawl nettuts, it produces the following error:

[boto] DEBUG: Retrieving credentials from metadata server.
2015-07-05 18:27:17 [boto] ERROR: Caught exception reading instance data

Traceback (most recent call last):
  File "/anaconda/lib/python2.7/site-packages/boto/utils.py", line 210, in retry_url
    r = opener.open(req, timeout=timeout)
  File "/anaconda/lib/python2.7/urllib2.py", line 431, in open
    response = self._open(req, data)
  File "/anaconda/lib/python2.7/urllib2.py", line 449, in _open
    '_open', req)
  File "/anaconda/lib/python2.7/urllib2.py", line 409, in _call_chain
    result = func(*args)
  File "/anaconda/lib/python2.7/urllib2.py", line 1227, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/anaconda/lib/python2.7/urllib2.py", line 1197, in do_open
    raise URLError(err)
URLError: <urlopen error [Errno 65] No route to host>

2015-07-05 18:27:17 [boto] ERROR: Unable to read instance data, giving up

I really don't know what's going wrong. I hope someone can help.

2 Answers:

Answer 0 (score: 28)

In your settings.py file, add the following setting:

DOWNLOAD_HANDLERS = {'s3': None}
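This works because Scrapy's optional S3 download handler imports boto, and on startup boto tries to fetch AWS credentials from the EC2 instance-metadata server; on a machine that is not an EC2 instance, that request fails with exactly the "No route to host" error shown above. A minimal sketch of the relevant part of settings.py; only the DOWNLOAD_HANDLERS line is the actual fix, the other values are assumed from the tutorial's project name:

# nettuts/settings.py (sketch)
BOT_NAME = 'nettuts'
SPIDER_MODULES = ['nettuts.spiders']

# Disable the S3 download handler so Scrapy never loads boto, which is
# what tries (and fails) to contact the EC2 metadata server at startup.
DOWNLOAD_HANDLERS = {'s3': None}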

Answer 1 (score: 0)

The important information is:

URLError: <urlopen error [Errno 65] No route to host>

This is trying to tell you that your computer does not know how to communicate with the site you are trying to crawl. Can you reach the site normally (e.g., in a web browser) from the machine where you are running this Python code?
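If you'd rather test from the same interpreter than a browser, here is a quick sketch using urllib2, the same Python 2.7 stdlib module shown in your traceback (the URL is assumed to be your spider's start URL):

# Quick reachability check (Python 2.7)
import urllib2

try:
    response = urllib2.urlopen('http://net.tutsplus.com/', timeout=10)
    print 'Reachable, HTTP status:', response.getcode()
except urllib2.URLError as e:
    # Same exception class that appears in the traceback above
    print 'Cannot reach the site:', e.reason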