Connection refused when using Scrapy with Selenium

Date: 2013-11-23 15:58:45

Tags: python selenium scrapy

I'm trying to use Scrapy with Selenium to scrape a page whose content is generated dynamically by JavaScript (http://huati.weibo.com). I keep getting connection refused, but I'm not sure whether it's something I'm doing or the server itself (it's in China, so possibly some kind of firewall issue?).

What I get:

Traceback (most recent call last):
  File "/usr/local/bin/scrapy", line 4, in <module>
    execute()
  File "/usr/local/lib/python2.7/dist-packages/scrapy/cmdline.py", line 142, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/cmdline.py", line 88, in _run_print_help
    func(*a, **kw)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/cmdline.py", line 149, in _run_command
    cmd.run(args, opts)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/commands/crawl.py", line 48, in run
    spider = crawler.spiders.create(spname, **opts.spargs)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/spidermanager.py", line 48, in create
    return spcls(**spider_kwargs)
  File "/opt/bitnami/apps/wordpress/htdocs/data/sina_crawler/sina_crawler/spiders/sspider.py", line 18, in __init__
    self.selenium.start()
  File "/usr/local/lib/python2.7/dist-packages/selenium/selenium.py", line 197, in start
    result = self.get_string("getNewBrowserSession", start_args)
  File "/usr/local/lib/python2.7/dist-packages/selenium/selenium.py", line 231, in get_string
    result = self.do_command(verb, args)
  File "/usr/local/lib/python2.7/dist-packages/selenium/selenium.py", line 220, in do_command
    conn.request("POST", "/selenium-server/driver/", body, headers)
  File "/usr/lib/python2.7/httplib.py", line 958, in request
    self._send_request(method, url, body, headers)
  File "/usr/lib/python2.7/httplib.py", line 992, in _send_request
    self.endheaders(body)
  File "/usr/lib/python2.7/httplib.py", line 954, in endheaders
    self._send_output(message_body)
  File "/usr/lib/python2.7/httplib.py", line 814, in _send_output
    self.send(msg)
  File "/usr/lib/python2.7/httplib.py", line 776, in send
    self.connect()
  File "/usr/lib/python2.7/httplib.py", line 757, in connect
    self.timeout, self.source_address)
  File "/usr/lib/python2.7/socket.py", line 571, in create_connection
    raise err
socket.error: [Errno 111] Connection refused
Exception socket.error: error(111, 'Connection refused') in <bound method SeleniumSpider.__del__ of <SeleniumSpider 'SeleniumSpider' at 0x1e246d0>> ignored

My code:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request

from selenium import selenium

import time  # needed for time.sleep() in parse()


class SeleniumSpider(CrawlSpider):
    name = "SeleniumSpider"
    allowed_domains = ["weibo.com"]
    start_urls = ["http://huati.weibo.com/"]

    def __init__(self):
        CrawlSpider.__init__(self)
        self.verificationErrors = []
        # Selenium RC client: host, port, browser launcher, base URL
        self.selenium = selenium("localhost", 4444, "*firefox", "http://huati.weibo.com")
        self.selenium.start()

    def __del__(self):
        self.selenium.stop()
        print self.verificationErrors
        CrawlSpider.__del__(self)

    def parse(self, response):
        hxs = HtmlXPathSelector(response)

        sel = self.selenium
        sel.open(response.url)

        # give the JavaScript-generated content time to render
        time.sleep(2.5)

        sites = sel.get_text('//html/body/div/div/div/div/div/div/div/div[@class="interest_topicR"]')
        print sites

For reference, I was following this example code: http://snipplr.com/view/66998/

1 Answer:

Answer 0 (score: 0):

This is most likely server-side behavior. Are you hitting their site with a lot of requests? I had no problem fetching the page with urllib (obviously without JavaScript), so I doubt they are using anything sophisticated to detect bots.
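For reference, a minimal sketch of that kind of plain-urllib check (Python 2, to match the traceback; the URL is the one from the question):

# Fetch the page without Selenium or JavaScript to see whether the server responds at all.
import urllib

try:
    resp = urllib.urlopen("http://huati.weibo.com/")
    print "HTTP status:", resp.getcode()
    print "bytes received:", len(resp.read())
except IOError as e:
    print "plain HTTP fetch failed:", e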

My guess is that you are making too many requests in a short period of time. The way I deal with this is to catch the ConnectionError, rest for a while with time.sleep(600), and then retry the connection. You can also count how many times the ConnectionError has been raised and give up after 4 or 5 attempts. It would look something like this:

import logging
import socket
import time


def parse(self, url, retry=0, max_retry=5):
    try:
        req = self.selenium.open(url)
    except socket.error:
        # the answer describes catching ConnectionError; on Python 2 the refused
        # connection in the traceback surfaces as socket.error (errno 111)
        if retry > max_retry:
            return  # give up after max_retry attempts
        logging.error('Connection error, resting...')
        time.sleep(100)  # back off before retrying
        self.parse(url, retry + 1, max_retry)
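With max_retry capped at a handful of attempts the recursion stays shallow, so the recursive retry above is fine; the essential pieces of the approach are the back-off sleep after each failure and giving up once the retry counter passes the limit.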