Scrapy: how to get the proxy used by each request into an item

Date: 2019-04-11 08:58:05

Tags: python python-3.x scrapy

I am using DOWNLOADER_MIDDLEWARES to rotate proxies with a scrapy.Spider, and I would like to store the proxy used by each request in an item field, i.e. item['proxy_used'].

I suppose the proxy could be obtained via the Stats Collector, but I am new to Python and Scrapy and so far I have not found a solution.

Any help is much appreciated.

import scrapy
from tutorial.items import QuotesItem

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    allowed_domains = ["quotes.toscrape.com"]
    start_urls = [
        'http://quotes.toscrape.com/',
    ]

    def parse(self, response):
        # Note: the callback must be named `parse` (or start_requests must set
        # a callback explicitly), otherwise requests from start_urls fail with
        # NotImplementedError.
        for sel in response.css('div.quote'):
            item = QuotesItem()
            item['text'] = sel.css('span.text::text').get()
            item['author'] = sel.css('small.author::text').get()
            item['tags'] = sel.css('div.tags a.tag::text').getall()
            item['quotelink'] = sel.css('small.author ~ a[href*="goodreads.com"]::attr(href)').get()

            item['proxy_used'] = ???  # <-- PROXY USED BY REQUEST - "HOW TO???"
            yield item

        # follow pagination links (shortcut)
        for a in response.css('li.next a'):
            yield response.follow(a, callback=self.parse)

1 answer:

Answer 0: (score: 1)

You can access the proxy that was used through the response object, like this:

response.meta.get("proxy")

This is also applied in the updated code below.

import scrapy
from tutorial.items import QuotesItem

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    allowed_domains = ["quotes.toscrape.com"]
    start_urls = [
        'http://quotes.toscrape.com/',
    ]

    def parse(self, response):
        for sel in response.css('div.quote'):
            item = QuotesItem()
            item['text'] = sel.css('span.text::text').get()
            item['author'] = sel.css('small.author::text').get()
            item['tags'] = sel.css('div.tags a.tag::text').getall()
            item['quotelink'] = sel.css('small.author ~ a[href*="goodreads.com"]::attr(href)').get()

            # Scrapy's proxy middleware stores the proxy in request.meta['proxy'],
            # which is carried over to response.meta.
            item['proxy_used'] = response.meta.get("proxy")
            yield item

        # follow pagination links (shortcut)
        for a in response.css('li.next a'):
            yield response.follow(a, callback=self.parse)
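
For context, this works because a rotating-proxy downloader middleware sets request.meta['proxy'] in its process_request hook, and that same meta dict is attached to the resulting response. A minimal sketch of such a middleware (the PROXIES list, the middleware name, and the proxy addresses are assumptions, not part of the original question):

```python
import random

# Hypothetical pool of proxies to rotate through.
PROXIES = [
    "http://10.0.0.1:8080",
    "http://10.0.0.2:8080",
]

class RandomProxyMiddleware:
    """Downloader middleware that assigns a random proxy to each request.

    Scrapy's built-in HttpProxyMiddleware routes a request through
    whatever URL is stored in request.meta['proxy'], and the spider
    callback can later read it back via response.meta.get('proxy').
    """

    def process_request(self, request, spider):
        # Pick a proxy for this request; Scrapy copies request.meta
        # onto the response, so the spider can see which one was used.
        request.meta["proxy"] = random.choice(PROXIES)
        return None  # returning None lets Scrapy continue processing
```

Such a middleware would be enabled in settings.py via DOWNLOADER_MIDDLEWARES, e.g. `{'tutorial.middlewares.RandomProxyMiddleware': 350}` (path and priority are illustrative).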