How to get cookies from a scrapy-splash response

Asked: 2019-08-17 06:26:43

Tags: scrapy scrapy-splash

I want to get the cookie values from a Splash response, but it is not working the way I expected.

Here is the spider code:

import scrapy
from scrapy_splash import SplashRequest


class AmazonSpider(scrapy.Spider):
    name = 'amazon'
    allowed_domains = ['amazon.com']

    def start_requests(self):
        url = 'https://www.amazon.com/gp/goldbox?ref_=nav_topnav_deals'
        yield SplashRequest(url, self.parse, args={'wait': 0.5})

    def parse(self, response):
        print(response.headers)

Output log:

2019-08-17 11:53:07 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.amazon.com/robots.txt> (referer: None)
2019-08-17 11:53:08 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://192.168.99.100:8050/robots.txt> (referer: None)
2019-08-17 11:53:24 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.amazon.com/gp/goldbox?ref_=nav_topnav_deals via http://192.168.99.100:8050/render.html> (referer: None)
{b'Date': [b'Sat, 17 Aug 2019 06:23:09 GMT'], b'Server': [b'TwistedWeb/18.9.0'], b'Content-Type': [b'text/html; charset=utf-8']}
2019-08-17 11:53:24 [scrapy.core.engine] INFO: Closing spider (finished)
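The log already shows what went wrong: with the default `render.html` endpoint, `response.headers` contains Splash's own HTTP response headers (note the `TwistedWeb` `Server` value), not Amazon's, so no `Set-Cookie` values ever reach the spider. A quick check against the headers dict printed above, assuming the structure Scrapy uses (bytes keys mapping to lists of bytes values):

```python
# The headers dict as printed by the spider above; Scrapy stores headers
# as bytes keys mapped to lists of bytes values.
headers = {
    b'Date': [b'Sat, 17 Aug 2019 06:23:09 GMT'],
    b'Server': [b'TwistedWeb/18.9.0'],
    b'Content-Type': [b'text/html; charset=utf-8'],
}

# The Server header identifies Splash's own web server, and there is
# no Set-Cookie entry, so the target site's cookies were dropped.
print(headers[b'Server'][0])     # b'TwistedWeb/18.9.0'
print(b'Set-Cookie' in headers)  # False
```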

1 Answer:

Answer 0 (score: 2)

You can try the following approach. First, write a small Lua script that returns the HTML plus the cookies:

# Define this as a class attribute on the spider, so it is
# available as self.lua_request in start_requests().
lua_request = """
    function main(splash)
        splash:init_cookies(splash.args.cookies)
        assert(splash:go(splash.args.url))
        splash:wait(0.5)
        return {
            html = splash:html(),
            cookies = splash:get_cookies()
        }
    end
    """

Then change your request to use the execute endpoint with that script:

yield SplashRequest(
    url,
    self.parse,
    endpoint='execute',
    args={'lua_source': self.lua_request}
)

Finally, read the cookies in your parse method like this:

def parse(self, response):
    # response.data holds the decoded JSON table returned by the Lua script
    cookies = response.data['cookies']
    headers = response.headers
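With the `execute` endpoint, `splash:get_cookies()` returns a list of cookie dicts, each carrying at least `name` and `value` fields plus attributes such as `domain` and `path`. A small helper like the hypothetical `cookies_to_dict` below (not part of scrapy-splash) flattens that list into a plain name-to-value mapping, which is the shape accepted by a follow-up `scrapy.Request(url, cookies=...)`:

```python
def cookies_to_dict(splash_cookies):
    """Flatten Splash's list of cookie dicts into a name -> value mapping.

    `splash_cookies` is the list returned by splash:get_cookies(); each
    entry is a dict with 'name' and 'value' keys (plus domain/path/etc.).
    """
    return {c['name']: c['value'] for c in splash_cookies}


# Example input with the shape Splash returns (the values are made up):
sample = [
    {'name': 'session-id', 'value': '123-456', 'domain': '.amazon.com', 'path': '/'},
    {'name': 'ubid-main', 'value': 'abc-def', 'domain': '.amazon.com', 'path': '/'},
]
print(cookies_to_dict(sample))  # {'session-id': '123-456', 'ubid-main': 'abc-def'}
```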