Scrapy Twisted connection lost in a non-clean fashion. No proxy. Already tried headers

Time: 2017-08-28 17:38:54

Tags: python web-scraping scrapy twisted

I am trying to scrape this site:

https://www5.apply2jobs.com/jupitermed/ProfExt/index.cfm?fuseaction=mExternal.searchJobs

with Scrapy and keep getting Twisted connection lost / disconnect errors. I am not using a proxy. I have tried setting the User-Agent and, following this answer, setting all of the request headers.

Here is the code that generates the request:

from scrapy import Request


# start_requests() of my spider (a scrapy.Spider subclass)
def start_requests(self):
    url = 'https://www5.apply2jobs.com/jupitermed/ProfExt/index.cfm?fuseaction=mExternal.searchJobs'

    # Full browser-like headers, set per the linked answer
    headers = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
        'Accept-Encoding': 'gzip, deflate, br',
        'Accept-Language': 'en-US,en;q=0.8',
        'Connection': 'keep-alive',
        'DNT': '1',
        'Host': 'www5.apply2jobs.com',
        'Referer': 'https://www5.apply2jobs.com/jupitermed/ProfExt/index.cfm?fuseaction=mExternal.showJob&RID=2524&CurrentPage=2',
        'Upgrade-Insecure-Requests': '1',
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.101 Safari/537.36'
    }

    yield Request(url=url, headers=headers, callback=self.parse)

Here is the traceback I get:

2017-08-28 13:34:13 [scrapy.core.engine] INFO: Spider opened
2017-08-28 13:34:13 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-08-28 13:34:13 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-08-28 13:34:13 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://www5.apply2jobs.com/robots.txt> (failed 1 times): [<twisted.python.failure.Failure twisted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion: Connection lost.>]
2017-08-28 13:34:13 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://www5.apply2jobs.com/robots.txt> (failed 2 times): [<twisted.python.failure.Failure twisted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion: Connection lost.>]
2017-08-28 13:34:13 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET https://www5.apply2jobs.com/robots.txt> (failed 3 times): [<twisted.python.failure.Failure twisted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion: Connection lost.>]
2017-08-28 13:34:13 [scrapy.downloadermiddlewares.robotstxt] ERROR: Error downloading <GET https://www5.apply2jobs.com/robots.txt>: [<twisted.python.failure.Failure twisted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion: Connection lost.>]
ResponseNeverReceived: [<twisted.python.failure.Failure twisted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion: Connection lost.>]
2017-08-28 13:34:13 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://www5.apply2jobs.com/jupitermed/ProfExt/index.cfm?fuseaction=mExternal.searchJobs> (failed 1 times): [<twisted.python.failure.Failure twisted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion: Connection lost.>]
2017-08-28 13:34:13 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://www5.apply2jobs.com/jupitermed/ProfExt/index.cfm?fuseaction=mExternal.searchJobs> (failed 2 times): [<twisted.python.failure.Failure twisted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion: Connection lost.>]
2017-08-28 13:34:13 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET https://www5.apply2jobs.com/jupitermed/ProfExt/index.cfm?fuseaction=mExternal.searchJobs> (failed 3 times): [<twisted.python.failure.Failure twisted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion: Connection lost.>]
2017-08-28 13:34:13 [scrapy.core.scraper] ERROR: Error downloading <GET https://www5.apply2jobs.com/jupitermed/ProfExt/index.cfm?fuseaction=mExternal.searchJobs>: [<twisted.python.failure.Failure twisted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion: Connection lost.>]
2017-08-28 13:34:13 [scrapy.core.engine] INFO: Closing spider (finished)

1 Answer:

Answer 0 (score: 0)

So, thanks to the discussion on GitHub and the comments on my question, it looks like the best course of action is to use a virtualenv with cryptography<2.
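
A minimal way to verify such an environment, assuming it was created with something like pip install "cryptography<2" scrapy inside a fresh virtualenv (the pin command and the RC4-MD5 probe below are my own illustration, not part of the original answer):

# Quick sanity check for the pinned environment (a sketch, not from the
# original answer): run it inside the virtualenv to see which TLS stack
# Scrapy/Twisted will actually use.
import ssl

import cryptography
import OpenSSL
from OpenSSL import SSL

print("cryptography:", cryptography.__version__)          # expect a 1.x release here
print("pyOpenSSL:", OpenSSL.__version__)
print("OpenSSL used by pyOpenSSL:", SSL.SSLeay_version(SSL.SSLEAY_VERSION))
print("OpenSSL used by stdlib ssl:", ssl.OPENSSL_VERSION)

# The server only accepts TLS_RSA_WITH_RC4_128_MD5; set_cipher_list() raises
# SSL.Error if the linked OpenSSL offers no matching cipher at all.
ctx = SSL.Context(SSL.SSLv23_METHOD)
try:
    ctx.set_cipher_list(b"RC4-MD5")
    print("RC4-MD5 cipher: available")
except SSL.Error:
    print("RC4-MD5 cipher: NOT available")

If the last line reports that RC4-MD5 is not available, requests to this site will keep failing with the same ConnectionLost errors shown in the log above.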

Thanks @paultrmbrth for helping so much.

  "I tried to build static wheels of OpenSSL 1.1.0f compiled with enable-weak-ssl-ciphers, but for some reason I did not manage to get TLS_RSA_WITH_RC4_128_MD5 supported (as reported by ssllabs.com). I am clearly missing something in my OpenSSL build knowledge. So the only option I see is to use a virtualenv with 'cryptography<2' for scraping that website."
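
To double-check the diagnosis independently of Scrapy, one can attempt a raw TLS handshake against the site with pyOpenSSL, the same stack Twisted and Scrapy use. This is only an illustrative sketch (the host name is taken from the question); if it fails in a default environment but succeeds inside the cryptography<2 virtualenv, the cipher mismatch is confirmed:

# Standalone handshake probe (illustrative only; host taken from the question).
import socket

from OpenSSL import SSL

HOST = "www5.apply2jobs.com"

ctx = SSL.Context(SSL.SSLv23_METHOD)      # let OpenSSL negotiate the best protocol it can
sock = socket.create_connection((HOST, 443))
conn = SSL.Connection(ctx, sock)
conn.set_tlsext_host_name(HOST.encode())  # send SNI, as a browser would
conn.set_connect_state()

try:
    conn.do_handshake()
    print("Handshake OK, negotiated cipher:", conn.get_cipher_name())
except SSL.Error as exc:
    # Expected to fail when the local OpenSSL no longer offers RC4-MD5,
    # mirroring the ConnectionLost errors in the log above.
    print("Handshake failed:", exc)
finally:
    sock.close()

If the handshake succeeds, get_cipher_name() will likely report RC4-MD5, matching what ssllabs.com reported for this server.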