scrapy-splash response.body does not contain html

Posted: 2019-03-17 15:04:01

Tags: scrapy scrapy-splash crawlera

I am trying to use Crawlera alongside my initial local Splash instance. This is my Lua script:

function use_crawlera(splash)
    -- Crawlera credentials passed in via SplashRequest args
    local user = splash.args.crawlera_user

    local host = 'proxy.crawlera.com'
    local port = 8010
    local session_header = 'X-Crawlera-Session'
    local session_id = 'create'

    splash:on_request(function(request)
        request:set_header('X-Crawlera-Cookies', 'disable')
        request:set_header(session_header, session_id)
        request:set_proxy{host, port, username = user, password = ''}
    end)

    splash:on_response_headers(function(response)
        -- Reuse the session id that Crawlera returns on later requests.
        -- Note: type() always returns a string, so comparing it to nil is
        -- always true; compare the header value itself instead.
        if response.headers[session_header] ~= nil then
            session_id = response.headers[session_header]
        end
    end)
end

function main(splash)
    use_crawlera(splash)
    splash:go(splash.args.url)
    splash:wait(30)
    return splash:html()
end

And this is my start_request:

yield SplashRequest(index_url,
                    self.parse_kawanlama_index,
                    endpoint='execute',
                    args={
                        'lua_source': lua_script,
                        'wait': 5,
                        'html': 1,
                        'url': index_url,
                        'timeout': 10,
                        'crawlera_user': self.crawlera_apikey,
                    },
                    # tell Splash to cache the lua script, to avoid
                    # sending it with every request
                    cache_args=['lua_source'])
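For the request to be routed through Splash at all, the project settings also need the scrapy-splash wiring. A minimal sketch following the scrapy-splash README; the localhost URL is an assumption about the local Splash instance mentioned above:

```python
# Minimal scrapy-splash wiring in settings.py (sketch; the URL below
# assumes a local Splash instance listening on the default port).
SPLASH_URL = 'http://localhost:8050'

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}

# Required so that duplicate filtering takes Splash arguments into account
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
```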

But it doesn't seem to work: the response.body I get in self.parse(response) does not contain any HTML.
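One way to narrow this down is to check in the callback what Splash actually returned: with endpoint='execute' and a script that returns splash:html(), response.body should be the rendered page. A small heuristic helper (hypothetical, not part of the question's code) that can be logged from the callback:

```python
def looks_like_html(body: bytes) -> bool:
    """Heuristic: does a response body look like an HTML document?

    Useful for logging whether Splash returned rendered HTML or
    something else (an error JSON, an empty body, etc.).
    """
    head = body[:2048].lstrip().lower()
    return head.startswith(b"<!doctype html") or b"<html" in head
```

Inside parse_kawanlama_index one could then log, for example, `self.logger.info("got html: %s", looks_like_html(response.body))`.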

0 Answers:

No answers yet