Using a random proxy pool with Scrapy to avoid getting banned

Date: 2015-05-20 09:30:09

Tags: http https proxy scrapy user-agent

I'm quite new to Scrapy (my background is not in computer science). There is a website I cannot reach from my local IP because I've been banned, although I can access it in a browser through a VPN service. To let my spider crawl it, I set up a pool of proxies that I found at http://proxylist.hidemyass.com/. With that, the spider is able to crawl and scrape items, but my doubt is: do I have to refresh the proxy pool list every day? Sorry if my question is a silly one...

Here is my settings.py:

BOT_NAME = 'reviews'

SPIDER_MODULES = ['reviews.spiders']
NEWSPIDER_MODULE = 'reviews.spiders'
DOWNLOAD_DELAY = 1
RANDOMIZE_DOWNLOAD_DELAY = True

DOWNLOADER_MIDDLEWARES = {
        'scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware': None,  # disabled to avoid "exceptions.IOError: Not a gzipped file" errors
        'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware' : None,
        'reviews.rotate_useragent.RotateUserAgentMiddleware' :400,
        'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 110, 
        'reviews.middlewares.ProxyMiddleware': 100,
    }

PROXIES = [{'ip_port': '168.63.249.35:80', 'user_pass': ''},
           {'ip_port': '162.17.98.242:8888', 'user_pass': ''},
           {'ip_port': '70.168.108.216:80', 'user_pass': ''},
           {'ip_port': '45.64.136.154:8080', 'user_pass': ''},
           {'ip_port': '149.5.36.153:8080', 'user_pass': ''},
           {'ip_port': '185.12.7.74:8080', 'user_pass': ''},
           {'ip_port': '150.129.130.180:8080', 'user_pass': ''},
           {'ip_port': '185.22.9.145:8080', 'user_pass': ''},
           {'ip_port': '200.20.168.135:80', 'user_pass': ''},
           {'ip_port': '177.55.64.38:8080', 'user_pass': ''},]

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'reviews (+http://www.yourdomain.com)'

Here is my middlewares.py:

import base64
import random

from reviews.settings import PROXIES


class ProxyMiddleware(object):
    def process_request(self, request, spider):
        # pick a random proxy from the pool for every request
        proxy = random.choice(PROXIES)
        request.meta['proxy'] = "http://%s" % proxy['ip_port']
        if proxy['user_pass']:
            # only send credentials when the proxy requires them;
            # strip the trailing newline that base64.encodestring appends
            encoded_user_pass = base64.encodestring(proxy['user_pass']).strip()
            request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass

Another question: if the website I scrape uses HTTPS, should I keep a pool list containing only HTTPS-capable proxies? And then write a second class, HTTPSProxyMiddleware(object), that picks from a separate HTTPS_PROXIES list?

My rotate_useragent.py:

import random
from scrapy.contrib.downloadermiddleware.useragent import UserAgentMiddleware

class RotateUserAgentMiddleware(UserAgentMiddleware):
    def __init__(self, user_agent=''):
        self.user_agent = user_agent

    def process_request(self, request, spider):
        ua = random.choice(self.user_agent_list)
        if ua:
            request.headers.setdefault('User-Agent', ua)

    # the user_agent_list below is made up of Chrome user-agent strings;
    # more user-agent strings can be found at http://www.useragentstring.com/pages/useragentstring.php
    user_agent_list = [\
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1"\
        "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",\
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",\
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",\
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",\
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",\
        "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",\
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",\
        "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",\
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",\
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",\
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",\
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",\
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",\
        "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",\
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",\
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",\
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
       ]

One more question about settings.py, and the last one (sorry again for another silly question): there is a commented-out default section, # Crawl responsibly by identifying yourself (and your website) on the user-agent, #USER_AGENT = 'reviews (+http://www.yourdomain.com)'. Should I uncomment it and fill in my personal information, or just leave it as it is? I want to crawl effectively, but following good policies and good habits so as to avoid possible ban issues...

I ask all this because, with this setup, my spider started throwing errors like:

twisted.internet.error.TimeoutError: User timeout caused connection failure: Getting http://www.example.com/browse/?start=884 took longer than 180.0 seconds.

Error downloading <GET http://www.example.com/article/2883892/x-review.html>: [<twisted.python.failure.Failure <class 'twisted.internet.error.ConnectionLost'>>]

Error downloading <GET http://www.example.com/browse/?start=6747>: TCP connection timed out: 110: Connection timed out.

Thank you very much for your help and your time.

2 answers:

Answer 0 (score: 1):

  1. There is no single right answer to this. Some proxies are not always available, so you have to check them from time to time. Also, the server you scrape may block a proxy's IP if it keeps seeing the same proxy on every request, but that depends on that server's security mechanisms.
  2. Yes, because you cannot know whether every proxy in the pool supports HTTPS. Alternatively, you can keep a single pool and add a field to each proxy indicating whether it supports HTTPS (see the sketch after this list).
  3. In your settings you are disabling the user agent middleware: 'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware' : None. The USER_AGENT setting will therefore have no effect.
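
A minimal sketch of that single-pool idea (the 'https' flag, the SchemeAwareProxyMiddleware name and the filtering logic are assumptions for illustration, not part of the original code):

# middlewares.py -- sketch only; adapt the field names to your own settings
import base64
import random

from reviews.settings import PROXIES  # each entry: {'ip_port': ..., 'user_pass': ..., 'https': True or False}


class SchemeAwareProxyMiddleware(object):
    def process_request(self, request, spider):
        if request.url.startswith('https://'):
            # only proxies flagged as HTTPS-capable may handle https:// requests
            candidates = [p for p in PROXIES if p.get('https')]
        else:
            candidates = PROXIES
        if not candidates:
            return  # no suitable proxy available: let the request go out directly
        proxy = random.choice(candidates)
        request.meta['proxy'] = "http://%s" % proxy['ip_port']
        if proxy.get('user_pass'):
            encoded_user_pass = base64.encodestring(proxy['user_pass']).strip()
            request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass

This keeps a single list and a single middleware instead of a parallel HTTPS_PROXIES list and a second HTTPSProxyMiddleware class.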

Answer 1 (score: 1):

There is already a library that does exactly this: https://github.com/aivarsk/scrapy-proxies

Please download it from there. It is not on pypi.org yet, so you cannot install it easily with pip or easy_install.
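
For reference, a hedged configuration sketch based on that project's README (verify the exact setting names, such as PROXY_LIST and the scrapy_proxies.RandomProxy middleware path, against the repository before relying on them):

# settings.py -- sketch only, assuming the scrapy-proxies README conventions
# Retry aggressively, since free proxies fail often
RETRY_TIMES = 10
RETRY_HTTP_CODES = [500, 503, 504, 400, 403, 404, 408]

DOWNLOADER_MIDDLEWARES = {
    'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 90,
    'scrapy_proxies.RandomProxy': 100,
    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 110,
}

# plain-text file with one proxy per line, e.g. http://host:port
PROXY_LIST = '/path/to/proxy/list.txt'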