I am new to Scrapy. I am using a custom proxy in my Scrapy spider, but I found that the spider works fine when I set request.meta['proxies'] and fails when I set request.meta['proxy'], which contradicts this answer.
Here is part of my DEBUG output when I use request.meta['proxy']:
2018-09-07 15:48:45 [scrapy.core.engine] INFO: Spider opened
2018-09-07 15:48:45 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-09-07 15:48:45 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-09-07 15:49:45 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-09-07 15:50:45 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-09-07 15:51:45 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-09-07 15:51:45 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://www.example.com/robots.txt> (failed 1 times): User timeout caused connection failure: Getting https://www.example.com/robots.txt took longer than 180.0 seconds..
2018-09-07 15:52:45 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
My Scrapy version info:
Scrapy : 1.5.1
lxml : 3.7.2.0
libxml2 : 2.9.4
cssselect : 1.0.3
parsel : 1.5.0
w3lib : 1.19.0
Twisted : 18.7.0
Python : 3.6.5 (v3.6.5:f59c0932b4, Mar 28 2018, 17:00:18) [MSC v.1900 64 bit (AMD64)]
pyOpenSSL : 18.0.0 (OpenSSL 1.1.0h 27 Mar 2018)
cryptography : 2.3
Platform : Windows-10-10.0.17134-SP0
Update: I have solved the previous problem, but I still don't understand why meta['proxy'] fails. My free proxy works fine with requests.get('https://www.example.com/', proxies={"http": "http://{}".format(proxy)}) and returns <Response [200]>, so what is wrong with my code?
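Note that the requests test above only maps the "http" scheme to the proxy, while the spider requests an https:// URL. Here is a minimal sketch (the proxy address is the one hardcoded in my middleware below; whether it can tunnel HTTPS is exactly what this checks) that tests both schemes:

import requests

proxy = "60.169.1.145:808"  # address hardcoded in CustomProxyMiddleware below

# Map both schemes to the proxy. An http:// proxy URL is normal even for
# https targets: HTTPS traffic is tunneled through the proxy via CONNECT.
proxies = {
    "http": "http://{}".format(proxy),
    "https": "http://{}".format(proxy),
}

try:
    r = requests.get("https://www.example.com/", proxies=proxies, timeout=10)
    print(r.status_code)  # 200 means the proxy handles HTTPS as well
except requests.exceptions.RequestException as e:
    print("proxy failed for HTTPS:", e)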
My settings:
DOWNLOADER_MIDDLEWARES = {
    # Lower numbers run first for process_request, so CustomProxyMiddleware
    # (125) sets meta['proxy'] before HttpProxyMiddleware (135) reads it.
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 135,
    'ip_proxy.middlewares.CustomProxyMiddleware': 125,
}
My spider:
def start_requests(self):
    yield scrapy.Request(url="https://www.example.com", callback=self.parse_first)
My CustomProxyMiddleware:
class CustomProxyMiddleware(object):
    def __init__(self, settings):
        pass

    def process_request(self, request, spider):
        # Every outgoing request gets this hardcoded proxy; the built-in
        # HttpProxyMiddleware then actually routes the request through it.
        request.meta['proxy'] = "https://60.169.1.145:808"

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler.settings)
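(I am not sure whether the scheme of the proxy URL matters here; most examples I have seen use a plain http:// proxy URL even for HTTPS targets, since the downloader would otherwise try to speak TLS to the proxy itself. So this variant of process_request may also be worth trying:)

    def process_request(self, request, spider):
        # http:// scheme for the proxy itself; https:// targets are still
        # tunneled through it, so this also covers HTTPS pages.
        request.meta['proxy'] = "http://60.169.1.145:808"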
Answer 0 (score: 2)
To send requests through a proxy server you should use meta['proxy']. It looks like there is a problem with your proxy server, which is why it cannot fetch the page and the request times out; that may well be because you are using a free proxy.
The reason your spider "works" with meta['proxies'] is that setting that key has no effect at all (HttpProxyMiddleware only looks at meta['proxy']), so the request is simply sent from your local IP.
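If a single proxy is enough, a per-request alternative (a sketch, not tested against your proxy) is to drop the custom middleware and pass meta['proxy'] directly, letting the default HttpProxyMiddleware handle it:

import scrapy

class ExampleSpider(scrapy.Spider):
    name = "example"

    def start_requests(self):
        yield scrapy.Request(
            url="https://www.example.com",
            callback=self.parse_first,
            # Picked up by the default HttpProxyMiddleware; no custom
            # middleware or DOWNLOADER_MIDDLEWARES entry is needed.
            meta={"proxy": "http://60.169.1.145:808"},
        )

    def parse_first(self, response):
        self.logger.info("fetched %s via proxy", response.url)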