Unable to use proxies in a Scrapy project

Date: 2017-11-07 10:55:49

Tags: python web-scraping proxy scrapy web-crawler

I have been trying to scrape a website that seems to have identified and blocked my IP and is now throwing 429 Too Many Requests responses.

I installed scrapy-proxies from https://github.com/aivarsk/scrapy-proxies and followed the instructions given there. I got a proxy list from http://www.gatherproxy.com/. Here is what my settings.py and proxylist.txt look like:

settings.py

BOT_NAME = 'project'
SPIDER_MODULES = ['project.spiders']
NEWSPIDER_MODULE = 'project.spiders'
# Retry many times since proxies often fail
RETRY_TIMES = 10
# Retry on most error codes since proxies fail for different reasons
RETRY_HTTP_CODES = [429, 500, 503, 504, 400, 403, 404, 408]

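# Per the scrapy-proxies README, RandomProxy should run after RetryMiddleware
# and before HttpProxyMiddleware (lower number = earlier), so the chosen
# proxy is already set in request.meta by the time it is applied.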
DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': 90,
    'scrapy_proxies.RandomProxy': 100,
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,
}

PROXY_LIST = r"filepath\proxylist.txt"  # raw string keeps the backslash literal
USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36'
CONCURRENT_REQUESTS = 1
DOWNLOAD_DELAY = 2

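# Proxy mode 0 uses a different random proxy for every request
# (per the scrapy-proxies README; mode 1 pins one proxy from the list,
# mode 2 uses a custom proxy given in the settings)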
PROXY_MODE = 0
DOWNLOAD_HANDLERS = {'s3': None}

EXTENSIONS = {
    'scrapy.telnet.TelnetConsole': None
}

proxylist.txt

http://195.208.172.20:8080
http://154.119.56.179:9999
http://124.12.50.43:8088
http://61.7.168.232:52136
http://122.193.188.236:8118

However, when I run my spider, I get the following error:

[scrapy.proxies] DEBUG: Proxy user pass not found
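(From what I can tell, this looks like a DEBUG message that scrapy_proxies.RandomProxy logs when a proxy entry carries no user:pass credentials, rather than a hard error; a credentialed entry would take the form http://user:pass@host:port.)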

I tried searching Google for this specific error but could not find any solution.

Any help would be highly appreciated. Thanks a lot.

1 Answer:

Answer 0 (score: 3)

I suggest you create your own middleware so that you can specify the IP:PORT yourself. Put this proxies.py middleware file in your project's middleware folder:

class ProxiesMiddleware(object):
    def __init__(self, settings):
        pass

    @classmethod
    def from_crawler(cls, crawler):
        # Scrapy builds downloader middlewares via from_crawler,
        # which exposes the project settings.
        return cls(crawler.settings)

    def process_request(self, request, spider):
        # Route every outgoing request through the given proxy.
        request.meta['proxy'] = "http://IP:PORT"

Then add the ProxiesMiddleware line to your settings.py:

DOWNLOADER_MIDDLEWARES = {
    'yourproject.middleware.proxies.ProxiesMiddleware': 400,
}
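If a single hard-coded proxy gets blocked as well, the same middleware shape extends naturally to rotating through a list. Here is a minimal sketch of that idea; the setting name PROXY_LIST_PATH and the class RotatingProxiesMiddleware are my own inventions for illustration, not part of Scrapy or scrapy-proxies:

import random

class RotatingProxiesMiddleware(object):
    """Pick a random proxy from a plain-text list for every request."""

    def __init__(self, proxy_list_path):
        # One "http://host:port" entry per line; skip blank lines.
        with open(proxy_list_path) as f:
            self.proxies = [line.strip() for line in f if line.strip()]

    @classmethod
    def from_crawler(cls, crawler):
        # PROXY_LIST_PATH is a hypothetical setting name for this sketch.
        return cls(crawler.settings.get('PROXY_LIST_PATH'))

    def process_request(self, request, spider):
        request.meta['proxy'] = random.choice(self.proxies)

Register it in DOWNLOADER_MIDDLEWARES the same way as above.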