How to use a proxy for specific URLs in a Scrapy spider?

Asked: 2018-01-08 08:58:19

Tags: python python-3.x scrapy scrapy-spider

I want to use a proxy for only a few specific domains. I checked this, this and this. If I understand correctly, setting a proxy with a middleware sets the proxy for all requests.

How can I set a proxy for specific URLs before the spider sends its requests?

Currently my spider works fine with the following implementation:

CoreSpider.py

class CoreSpider(scrapy.Spider):
    name = "final"
    def __init__(self):
        self.start_urls = self.read_url()
        self.rules = (
            Rule(
                LinkExtractor(
                    unique=True,
                ),
                callback='parse',
                follow=True
            ),
        )


    def read_url(self):
        urlList = []
        for filename in glob.glob(os.path.join("/root/Public/company_profiler/seed_list", '*.list')):
            with open(filename, "r") as f:
                for line in f.readlines():
                    url = re.sub('\n', '', line)
                    if "http" not in url:
                        url = "http://" + url
                    # print(url)
                    urlList.append(url)

        return urlList

    def parse(self, response):
        print("URL is: ", response.url)
        print("User agent is : ", response.request.headers['User-Agent'])
        filename = '/root/Public/company_profiler/crawled_page/%s.html' % response.url
        article = Extractor(extractor='LargestContentExtractor', html=response.body).getText()
        print("Article is :", article)
        if len(article.split("\n")) < 5:
            print("Skipping to next url : ", article.split("\n"))
        else:
            print("Continue parsing: ", article.split("\n"))
            ContentHandler_copy.ContentHandler_copy.start(article, response.url)

settings.py

DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    'random_useragent.RandomUserAgentMiddleware': 320
}
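
A downloader middleware does not necessarily have to proxy every request: it can inspect each request and set meta['proxy'] only for chosen domains. Below is a minimal, hypothetical sketch of such a middleware (the class name, domain list and proxy address are placeholders, not from this project); it would be registered in DOWNLOADER_MIDDLEWARES alongside the entries above.

class SelectiveProxyMiddleware:
    """Hypothetical downloader middleware that proxies only selected domains."""

    PROXY = 'https://159.8.18.178:8080'                   # placeholder proxy
    PROXIED_DOMAINS = ('example-a.com', 'example-b.com')  # placeholder domains

    def process_request(self, request, spider):
        # attach the proxy only when the request URL matches one of the domains
        if any(domain in request.url for domain in self.PROXIED_DOMAINS):
            request.meta['proxy'] = self.PROXY
        # returning None lets Scrapy continue processing the request normally
        return None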

I run the spider by calling it from the script RunSpider.py:

RunSpider.py

from CoreSpider import CoreSpider
from scrapy.crawler import  CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())
process.crawl(CoreSpider)
process.start()

Update: CoreSpider.py

class CoreSpider(scrapy.Spider):
    name = "final"
    def __init__(self):
        self.start_urls = self.read_url()
        self.rules = (
            Rule(LinkExtractor(unique=True), callback='parse', follow=True, process_request='process_request'),
        )

    def process_request(self, request, spider):
        print("Request is : ", request) ### Not printing anything
        if 'xxx' in request.url:  # <-- set proxy for this URL?
            meta = request.get('meta', {})
            meta.update({'proxy': 'https://159.8.18.178:8080'})
            return request.replace(meta=meta)
        return request
        .......

I also tried setting the proxy like this inside the process_request method, but it failed:

request.meta['proxy'] = "https://159.8.18.178:8080"

Thanks in advance.

3 answers:

Answer 0 (score: 1)

To use a proxy on a per-request basis, specify the proxy key of the Request's meta attribute, as described in the documentation. In the case of a CrawlSpider, you need to pass the process_request argument to the Rule. In that method, selectively apply the above based on the request URL (i.e. set meta['proxy']) and return the modified request with its meta filled in.

EDIT: Replace the rule definition

self.rules = (
    Rule(LinkExtractor(unique=True), callback='parse', follow=True),
)

with

self.rules = (
    Rule(LinkExtractor(unique=True), callback='parse', follow=True, process_request='process_request'),
)

and define a new method process_request in the CoreSpider class:

def process_request(self, request):
    if 'xxx' in request.url:  # <-- set proxy for this URL?
        meta = request.get('meta', {})
        meta.update({'proxy': 'your_proxy'})
        return request.replace(meta=meta)
    return request

EDIT2: I think the problem may be caused by the start_urls and rules definitions being hidden in the constructor:

...
def __init__(self):
    self.start_urls = self.read_url()
    self.rules = (
        Rule(LinkExtractor(unique=True), callback='parse', follow=True, process_request='process_request'),
    )
...

The correct way is to have these attributes as class attributes, i.e.

class CoreSpider(scrapy.Spider):
    name = "final"
    start_urls = self.read_url()
    rules = (
        Rule(LinkExtractor(unique=True), callback='parse', follow=True, process_request='process_request'),
    )

As for start_urls, if you need something more complex (e.g. reading the URLs from an external file), it may be better and more readable to define start_requests instead.
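
To make that last suggestion concrete, here is a minimal, self-contained sketch of a spider whose start_requests yields requests with a per-URL proxy (the spider name, the hard-coded URL list and the 'xxx' check are placeholders; the proxy address is the one from the question):

import scrapy

class ProxyPerUrlSpider(scrapy.Spider):
    name = 'proxy_per_url_sketch'

    def start_requests(self):
        # in the question these URLs would come from read_url(); hard-coded here for brevity
        urls = ['http://example.com', 'http://xxx.example.org']
        for url in urls:
            meta = {}
            if 'xxx' in url:  # placeholder condition: only matching URLs go through the proxy
                meta['proxy'] = 'https://159.8.18.178:8080'  # proxy address from the question
            yield scrapy.Request(url, callback=self.parse, meta=meta)

    def parse(self, response):
        self.logger.info("fetched %s (proxy: %s)", response.url, response.meta.get('proxy'))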

Answer 1 (score: 1)

A standalone approach, without middleware.

import scrapy

urls = [url, url, ..., url]  # schematic list of start URLs
class TestSpider(scrapy.Spider):
    name = 'test'
    allowed_domains = ['test.com']
    # start_urls = urls  # not needed; overridden by start_requests below

    def start_requests(self):
        for url in urls:
            # handle each individual url with or without proxy
            # if url in ['no1.com', 'no2.com', 'no3.com']:
            if url == 'www.no_proxy.com':
                meta_proxy = '' # do not use proxy for this url
            else:
                meta_proxy = "http://127.0.0.1:8888"
            yield scrapy.Request(url=url, callback=self.parse, meta={'proxy': meta_proxy})

    def parse(self, response):
        title = response.xpath('.//title/text()').extract_first()
        yield {'title': title}

Usage:

$ scrapy runspider test.py -o test.json -s CONCURRENT_REQUESTS_PER_DOMAIN=100 -s CONCURRENT_REQUESTS=100

Disclaimer:

I don't know whether this slows down crawling, since it iterates over the URLs one by one, and I don't have a large number of test sites at the moment. Hopefully people who use this code will comment and report what results they get.

Answer 2 (score: 0)

Here is an example of using a proxy for a specific URL:
link = 'https://www.example.com/'
request = Request(link, callback=self.parse_url)
request.meta['proxy'] = "http://PROXYIP:PROXYPORT"
yield request
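
For context, that snippet would normally sit inside a spider method such as start_requests. A minimal, self-contained sketch (the spider name, URL and callback are illustrative, and the proxy placeholder is kept as-is):

import scrapy
from scrapy import Request

class SingleProxySpider(scrapy.Spider):
    name = 'single_proxy_sketch'

    def start_requests(self):
        link = 'https://www.example.com/'
        request = Request(link, callback=self.parse_url)
        # only this request is routed through the proxy
        request.meta['proxy'] = "http://PROXYIP:PROXYPORT"
        yield request

    def parse_url(self, response):
        self.logger.info("fetched %s via proxy", response.url)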