Dynamically setting a Scrapy request callback

Date: 2016-09-19 16:17:01

Tags: python scrapy

I'm using Scrapy. I want to rotate proxies on a per-request basis, fetching each proxy from an API I have that returns a single proxy. My plan was to make a request to the API, get a proxy back, then use it to set the proxy, following:

http://stackoverflow.com/questions/39430454/making-request-to-api-from-within-scrapy-function

I have the following:

class ContactSpider(Spider):
    name = "contact"

    def parse(self, response):
        ....
        PR = Request(
            'my_api',
            headers=self.headers,
            meta={'newrequest': Request(url_to_scrape, headers=self.headers)},
            callback=self.parse_PR
        )
        yield PR

    def parse_PR(self, response):
        newrequest = response.meta['newrequest']
        proxy_data = response.body
        newrequest.meta['proxy'] = 'http://' + proxy_data
        newrequest.replace(url='http://ipinfo.io/ip')  # TESTING
        newrequest.replace(callback=self.form_output)  # TESTING
        yield newrequest

    def form_output(self, response):
        open_in_browser(response)

But I get:

Traceback (most recent call last):
  File "C:\twisted\internet\defer.py", line 1126, in _inlineCallbacks
    result = result.throwExceptionIntoGenerator(g)
  File "C:\twisted\python\failure.py", line 389, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File "C:\scrapy\core\downloader\middleware.py", line 43, in process_request
    defer.returnValue((yield download_func(request=request,spider=spider)))
  File "C:\scrapy\utils\defer.py", line 45, in mustbe_deferred
    result = f(*args, **kw)
  File "C:\scrapy\core\downloader\handlers\__init__.py", line 65, in download_request
    return handler.download_request(request, spider)
  File "C:\scrapy\core\downloader\handlers\http11.py", line 60, in download_request
    return agent.download_request(request)
  File "C:\scrapy\core\downloader\handlers\http11.py", line 255, in download_request
    agent = self._get_agent(request, timeout)
  File "C:\scrapy\core\downloader\handlers\http11.py", line 235, in _get_agent
    _, _, proxyHost, proxyPort, proxyParams = _parse(proxy)
  File "C:\scrapy\core\downloader\webclient.py", line 37, in _parse
    return _parsed_url_args(parsed)
  File "C:\scrapy\core\downloader\webclient.py", line 20, in _parsed_url_args
    host = b(parsed.hostname)
  File "C:\scrapy\core\downloader\webclient.py", line 17, in <lambda>
    b = lambda s: to_bytes(s, encoding='ascii')
  File "C:\scrapy\utils\python.py", line 117, in to_bytes
    'object, got %s' % type(text).__name__)
TypeError: to_bytes must receive a unicode, str or bytes object, got NoneType

What am I doing wrong?

1 Answer:

Answer 0 (score: 1)

The stack trace indicates that Scrapy received a request object whose url was None, when it should be a string.

These two lines in your code:

newrequest.replace(url='http://ipinfo.io/ip')  # TESTING
newrequest.replace(callback=self.form_output)  # TESTING

do not work as expected, because the Request.replace method returns a new instance instead of modifying the original request in place.
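This copy-on-replace behavior is easy to verify in isolation. `Request.replace` follows the same pattern as `dataclasses.replace` from the standard library, which the sketch below uses as a stand-in (`FakeRequest` is a made-up class for illustration, not part of Scrapy):

```python
from dataclasses import dataclass, replace

# A made-up stand-in for scrapy.Request, just to show the copy semantics
@dataclass(frozen=True)
class FakeRequest:
    url: str
    callback: object = None

r = FakeRequest(url='http://example.com')
r2 = replace(r, url='http://ipinfo.io/ip')

print(r.url)   # the original is untouched: http://example.com
print(r2.url)  # only the copy carries the new url: http://ipinfo.io/ip
```

If you drop the return value, as the two lines above do, the modified copy is simply discarded.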

You need something like this:

newrequest = newrequest.replace(url='http://ipinfo.io/ip')  # TESTING
newrequest = newrequest.replace(callback=self.form_output)  # TESTING

Or, more simply:

newrequest = newrequest.replace(
    url='http://ipinfo.io/ip',
    callback=self.form_output
)
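One more thing to watch, assuming the spider runs on Python 3: `response.body` is `bytes`, so `'http://' + proxy_data` would raise a `TypeError` of its own. Decoding and stripping whitespace first avoids that (the proxy address below is a made-up example payload):

```python
# Made-up example of what the proxy API might return in response.body
proxy_data = b'203.0.113.7:8080\n'

# 'http://' + proxy_data would raise TypeError on Python 3 (str + bytes);
# decode to str and strip the trailing newline before concatenating
proxy_url = 'http://' + proxy_data.decode('ascii').strip()
print(proxy_url)  # http://203.0.113.7:8080
```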