How do I get the session_id when using the Crawlera Lua script with Scrapy Splash?

Asked: 2018-11-27 15:13:00

Tags: python lua scrapy scrapy-splash crawlera

As you know, when we want to use Scrapy Splash together with Crawlera, we use the following Lua script:

function use_crawlera(splash)
    -- Make sure you pass your Crawlera API key in the 'crawlera_user' arg.
    -- Have a look at the file spiders/quotes-js.py to see how to do it.
    -- Find your Crawlera credentials in https://app.scrapinghub.com/
    local user = splash.args.crawlera_user

    local host = 'proxy.crawlera.com'
    local port = 8010
    local session_header = 'X-Crawlera-Session'
    local session_id = 'create'

    splash:on_request(function (request)
        request:set_header('X-Crawlera-Cookies', 'disable')
        request:set_header(session_header, session_id)
        request:set_proxy{host, port, username=user, password=''}
    end)

    splash:on_response_headers(function (response)
        -- note: type(...) always returns a string, so compare the header itself to nil
        if response.headers[session_header] ~= nil then
            session_id = response.headers[session_header]
        end
    end)
end

function main(splash)
    use_crawlera(splash)
    splash:init_cookies(splash.args.cookies)
    assert(splash:go{
        splash.args.url,
        headers=splash.args.headers,
        http_method=splash.args.http_method,
    })
    assert(splash:wait(3))
    return {
        html = splash:html(),
        cookies = splash:get_cookies(),
    }
end

I really need the session_id variable that the Lua script captures, but how can I access it from the response in Scrapy?

I have tried response.session_id and response.headers['X-Crawlera-Session'], but neither works.

2 answers:

Answer 0 (score: 0)

Answer 1 (score: 0)

  1. Also return the HAR data from the Lua script (https://splash.readthedocs.io/en/stable/scripting-ref.html#splash-har):
    return {
        html = splash:html(),
        har = splash:har(),
        cookies = splash:get_cookies(),
    }
  2. Assuming you are using scrapy-splash (https://github.com/scrapy-plugins/scrapy-splash), make sure you set the execute endpoint on your request:

meta['splash']['endpoint'] = 'execute'

If you use a plain scrapy.Request, render.json is the default endpoint, while for scrapy_splash.SplashRequest the default is render.html. See these two examples to learn how to set the endpoint: https://github.com/scrapy-plugins/scrapy-splash#requests
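
For reference, here is a minimal sketch of the plain scrapy.Request variant. The spider name, the start URL, the YOUR_CRAWLERA_API_KEY placeholder, and the way LUA_SOURCE is filled in are assumptions for illustration, not part of the original answer; it also assumes scrapy-splash is configured in settings (SPLASH_URL plus the middlewares from its README):

    import scrapy

    # The Lua script from the question, kept as a string (loading it from a
    # file next to the spider works just as well).
    LUA_SOURCE = """
    -- paste the use_crawlera()/main() script from the question here
    """

    class QuotesJsSpider(scrapy.Spider):
        name = 'quotes-js'                               # hypothetical name
        start_urls = ['http://quotes.toscrape.com/js/']  # hypothetical URL

        def start_requests(self):
            for url in self.start_urls:
                yield scrapy.Request(url, callback=self.parse, meta={'splash': {
                    'endpoint': 'execute',  # run the Lua script instead of the default render.json
                    'args': {
                        'lua_source': LUA_SOURCE,
                        'crawlera_user': 'YOUR_CRAWLERA_API_KEY',  # placeholder
                        # consumed inside the script via splash.args.*:
                        'cookies': [],
                        'headers': {},
                        'http_method': 'GET',
                    },
                }})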

  3. Only now can you access the X-Crawlera-Session header in your parse method:
    def parse(self, response):
        # 'json' must be imported at the top of the spider module
        headers = json.loads(response.text)['har']['log']['entries'][0]['response']['headers']
        session_id = next(x for x in headers if x['name'] == 'X-Crawlera-Session')['value']
>>> headers = json.loads(response.text)['har']['log']['entries'][0]['response']['headers']
>>> next(x for x in headers if x['name'] == 'X-Crawlera-Session')
{u'name': u'X-Crawlera-Session', u'value': u'2124641382'}
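
A common next step is to pin follow-up requests to the same Crawlera session. The sketch below goes slightly beyond the answer above and rests on two assumptions: the Lua script is changed so the session comes from the request arguments (e.g. local session_id = splash.args.session_id or 'create'), and LUA_SOURCE / the API-key placeholder are the hypothetical names from the earlier sketch:

    import json
    import scrapy

    class QuotesJsSpider(scrapy.Spider):
        name = 'quotes-js'
        # start_requests() as in the earlier sketch ...

        def parse(self, response):
            # The execute endpoint returns the Lua table as JSON, so the HAR
            # data added in step 1 is available here.
            data = json.loads(response.text)
            headers = data['har']['log']['entries'][0]['response']['headers']
            session_id = next(
                (h['value'] for h in headers if h['name'] == 'X-Crawlera-Session'),
                None,
            )

            # Reuse the session on the next request instead of creating a new one.
            yield scrapy.Request(
                'http://quotes.toscrape.com/js/page/2/',  # hypothetical next page
                callback=self.parse,
                dont_filter=True,
                meta={'splash': {
                    'endpoint': 'execute',
                    'args': {
                        'lua_source': LUA_SOURCE,                  # assumed modified as noted above
                        'crawlera_user': 'YOUR_CRAWLERA_API_KEY',  # placeholder
                        'session_id': session_id,                  # read as splash.args.session_id
                        'cookies': data.get('cookies', []),
                        'headers': {},
                        'http_method': 'GET',
                    },
                }},
            )

Keeping the session on the Splash/Lua side matters here because the actual HTTP requests are issued by Splash, so Crawlera only ever sees the X-Crawlera-Session header set inside on_request, not headers set on the Scrapy request itself.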