Dynamically assembling a Scrapy GET request query string

Date: 2016-06-04 17:15:23

Tags: python scrapy

I have been using Firebug, and I already have the following dictionaries for querying the API.

url = "htp://my_url.aspx#top"

querystring = {"dbkey":"x1","stype":"id","s":"27"}

headers = {
    'accept': "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
    'upgrade-insecure-requests': "1",
    'user-agent': "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.125"
    }

With the python requests library, using these is as simple as:
    import requests
    response = requests.request("GET", url, headers=headers, params=querystring)
    print(response.text)

How can I use these in Scrapy? I have been reading http://doc.scrapy.org/en/latest/topics/request-response.html, and I know the following works for a POST request:

        r = Request(my_url, method="post",  headers= headers, body=payload, callback=self.parse_method)

I tried:

    r = Request("GET", url, headers=headers, body=querystring, callback=self.parse_third_request)

and got:

r = Request("GET", url, headers=headers, body=querystring, callback=self.parse_third_request)
TypeError: __init__() got multiple values for keyword argument 'callback'

EDIT:

I changed it to:

    r = Request(method="GET", url=url, headers=headers, body=querystring, callback=self.parse_third_request)

and now get:

  File "C:\envs\r2\tutorial\tutorial\spiders\parker_spider.py", line 90, in parse_second_request
    r = Request(method="GET", url=url, headers=headers, body=querystring, callback=self.parse_third_request)
  File "C:\envs\virtalenvs\teat\lib\site-packages\scrapy\http\request\__init__.py", line 26, in __init__
    self._set_body(body)
  File "C:\envs\virtalenvs\teat\lib\site-packages\scrapy\http\request\__init__.py", line 68, in _set_body
    self._body = to_bytes(body, self.encoding)
  File "C:\envs\virtalenvs\teat\lib\site-packages\scrapy\utils\python.py", line 117, in to_bytes
    'object, got %s' % type(text).__name__)
TypeError: to_bytes must receive a unicode, str or bytes object, got dict

EDIT 2:

I now have:

    yield Request(method="GET", url=url, headers=headers, body=urllib.urlencode(querystring), callback=self.parse_third_request)

def parse_third_request(self, response):
    from scrapy.shell import inspect_response
    inspect_response(response, self)
    print("hi")
    return None

I get no errors when I run this, but in the shell "response.url" gives me only the base URL without the GET parameters.

1 Answer:

Answer 0 (score: 3):

Take a look at the signature of the Request initialization method:

    class scrapy.http.Request(url[, callback, method='GET', headers, body, cookies, meta, encoding='utf-8', priority=0, dont_filter=False, errback])

In your case the "GET" string is passed positionally as the url, so your real url slides into the callback position and collides with the callback keyword argument.

Pass method as a keyword argument instead (even though GET is already the default):

r = Request(url, method="GET", headers=headers, body=querystring, callback=self.parse_third_request)