Fatal handshake failure with TLSv1.0 through a proxy

Date: 2018-09-26 13:01:08

Tags: python ssl web-scraping proxy scrapy

I am currently trying to develop a web crawler with Scrapy to crawl a website that cannot be reached from outside my company's network. The catch is that I have to go through a proxy to succeed, and with the proxy in place I can run my spider against http://quotes.toscrape.com without problems. The issue is that the website I actually need to crawl uses TLS 1.0, and I have tried several solutions that did not work:

First solution:

import scrapy
from w3lib.http import basic_auth_header

class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def start_requests(self):
        urls = [
            'https://10.20.106.170/page.aspx'
        ]
        for url in urls:
            # Route the request through the corporate proxy, authenticating
            # with a Proxy-Authorization header.
            yield scrapy.Request(
                url=url,
                callback=self.parse,
                meta={'proxy': 'http://<my_proxy_url>:<my_proxy_port>'},
                headers={'Proxy-Authorization': basic_auth_header('<my_id>', '<my_pwd>')},
            )

    def parse(self, response):
        # Save the response body to a local HTML file named after the URL path.
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)

Once I found out that the website uses TLS 1.0, I tried adding custom settings to work around it. The output was:
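(The exact settings tried are not shown in the question. In Scrapy, the documented knob for forcing a TLS version is `DOWNLOADER_CLIENT_TLS_METHOD`, so the attempt presumably looked roughly like this sketch:)

```python
# Hypothetical per-spider settings forcing TLS 1.0; the question does not
# show the exact settings that were tried, so this is only a sketch.
custom_settings = {
    # Tell Scrapy's HTTPS downloader to use the TLSv1.0 method instead of
    # negotiating the highest TLS version supported by both sides.
    'DOWNLOADER_CLIENT_TLS_METHOD': 'TLSv1.0',
}
```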

2018-09-26 14:38:00 [twisted] CRITICAL: Error during info_callback
Traceback (most recent call last):
  File "C:\Users\1etiennr\Anaconda\lib\site-packages\twisted\protocols\tls.py", line 315, in dataReceived
    self._checkHandshakeStatus()
  File "C:\Users\1etiennr\Anaconda\lib\site-packages\twisted\protocols\tls.py", line 235, in _checkHandshakeStatus
    self._tlsConnection.do_handshake()
  File "C:\Users\1etiennr\Anaconda\lib\site-packages\OpenSSL\SSL.py", line 1906, in do_handshake
    result = _lib.SSL_do_handshake(self._ssl)
  File "C:\Users\1etiennr\Anaconda\lib\site-packages\OpenSSL\SSL.py", line 1288, in wrapper
    callback(Connection._reverse_mapping[ssl], where, return_code)
--- <exception caught here> ---
  File "C:\Users\1etiennr\Anaconda\lib\site-packages\twisted\internet\_sslverify.py", line 1102, in infoCallback
    return wrapped(connection, where, ret)
  File "C:\Users\1etiennr\Anaconda\lib\site-packages\scrapy\core\downloader\tls.py", line 67, in _identityVerifyingInfoCallback
    verifyHostname(connection, self._hostnameASCII)
  File "C:\Users\1etiennr\Anaconda\lib\site-packages\service_identity\pyopenssl.py", line 47, in verify_hostname
    cert_patterns=extract_ids(connection.get_peer_certificate()),
  File "C:\Users\1etiennr\Anaconda\lib\site-packages\service_identity\pyopenssl.py", line 75, in extract_ids
    ids.append(DNSPattern(n.getComponent().asOctets()))
  File "C:\Users\1etiennr\Anaconda\lib\site-packages\service_identity\_common.py", line 156, in __init__
    "Invalid DNS pattern {0!r}.".format(pattern)
service_identity.exceptions.CertificateError: Invalid DNS pattern '10.20.106.170'.

2018-09-26 14:38:00 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET https://10.20.106.170/link.aspx> (failed 3 times): [<twisted.python.failure.Failure service_identity.exceptions.CertificateError: Invalid DNS pattern '10.20.106.170'.>]
2018-09-26 14:38:00 [scrapy.core.scraper] ERROR: Error downloading <GET https://10.20.106.170/link.aspx>: [<twisted.python.failure.Failure service_identity.exceptions.CertificateError: Invalid DNS pattern '10.20.106.170'.>]
2018-09-26 14:38:00 [scrapy.core.engine] INFO: Closing spider (finished)
2018-09-26 14:38:00 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 6,
 'downloader/exception_type_count/twisted.web._newclient.ResponseNeverReceived': 6,
 'downloader/request_bytes': 1548,
 'downloader/request_count': 6,
 'downloader/request_method_count/GET': 6,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2018, 9, 26, 12, 38, 0, 338000),
 'log_count/CRITICAL': 6,
 'log_count/DEBUG': 7, 

Unfortunately, after doing this I ran into the same error, and I have no idea what to try next to get unstuck.
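For context, the CertificateError in the traceback comes from hostname verification: the DNS-name patterns in the server certificate are matched against the requested host, and an IP literal such as 10.20.106.170 is not a valid DNS name. A small standalone illustration (the helper below is hypothetical, not part of Scrapy or service_identity):

```python
import ipaddress

def is_ip_literal(host):
    """Illustrative helper: True if `host` is an IP literal, not a DNS name."""
    try:
        ipaddress.ip_address(host)
        return True
    except ValueError:
        return False

# The spider targets an IP literal, so matching it against the DNS-name
# patterns in the server certificate cannot succeed.
print(is_ip_literal('10.20.106.170'))        # True: an IP literal
print(is_ip_literal('quotes.toscrape.com'))  # False: a regular DNS name
```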

If you have any ideas, I will gladly take them!

Thanks in advance

2 answers:

Answer 0 (score: 0)

I believe this is a bug, and it has been solved in Scrapy version 1.5.1.

Answer 1 (score: 0)

Well, after some more research online, I found a GitHub issue describing a similar problem. Updating Scrapy to 1.5.1 and downgrading Twisted to 16.6.0 did the trick. I now have another problem, but this one seems to be solved.
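The version pin described above can be applied with pip; a sketch, to be adapted to your environment:

```shell
# Pin the combination reported to work: Scrapy 1.5.1 with Twisted 16.6.0.
pip install "Scrapy==1.5.1" "Twisted==16.6.0"
```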