Scraping onclick pages with scrapyjs via Splash

Date: 2016-01-28 04:40:24

Tags: python scrapy splash scrapyjs

I'm trying to get the URL from a web page that navigates using JavaScript:

<span onclick="go1()">click here </span>
<script>
    function go1(){
        window.location = "../innerpages/" + myname + ".php";
    }
</script>

Here is my code, using scrapyjs with Splash:

def start_requests(self):
    for url in self.start_urls:
        yield Request(url, self.parse, meta={
            'splash': {
                'endpoint': 'render.html',
                'args': {
                    'wait': 4,
                    'html': 1,
                    'png': 1,
                    'render_all': 1,
                    'js_source': 'document.getElementsByTagName("span")[0].click()',
                },
            }
        })
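(For context, a request like this only goes through Splash when the scrapyjs middleware is enabled. A minimal settings.py sketch based on the scrapyjs README; the Splash URL is an assumption for a local instance:)

# settings.py -- the Splash address is hypothetical; point it
# at wherever your Splash instance is actually listening
SPLASH_URL = 'http://localhost:8050'

DOWNLOADER_MIDDLEWARES = {
    'scrapyjs.SplashMiddleware': 725,
}

# makes the dupefilter aware of Splash arguments, so the same page
# rendered with different scripts is not treated as a duplicate
DUPEFILTER_CLASS = 'scrapyjs.SplashAwareDupeFilter'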

If I write

'js_source': 'document.title="hello world"'

it works.

It seems I can manipulate text inside the page, but I can't get the URL produced by go1().

What should I do if I want to get the URL from go1()?

Thanks!

1 Answer:

Answer 0 (score: 4):

You can use the /execute endpoint:

import json

import scrapy


class MySpider(scrapy.Spider):
    ...

    def start_requests(self):
        script = """
        function main(splash)
            local url = splash.args.url
            assert(splash:go(url))
            assert(splash:wait(1))

            assert(splash:runjs('document.getElementsByTagName("span")[0].click()'))
            assert(splash:wait(1))

            -- return result as a JSON object
            return {
                html = splash:html()
            }
        end
        """
        for url in self.start_urls:
            yield scrapy.Request(url, self.parse_result, meta={
                'splash': {
                    'args': {'lua_source': script},
                    'endpoint': 'execute',
                }
            })

    def parse_result(self, response):

        # fetch base URL because response url is the Splash endpoint
        baseurl = response.meta["_splash_processed"]["args"]["url"]

        # decode JSON response
        splash_json = json.loads(response.body_as_unicode())

        # and build a new selector from the response "html" key from that object
        selector = scrapy.Selector(text=splash_json["html"], type="html")

        ...
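Since the original question asks for the URL that go1() navigates to, the same script can also return it directly: Splash's splash:url() gives the URL the browser ends up on after the click. A sketch of that variation, under the same spider setup as above:

def start_requests(self):
    # same as above, but the Lua script also returns the final URL
    script = """
    function main(splash)
        assert(splash:go(splash.args.url))
        assert(splash:wait(1))

        assert(splash:runjs('document.getElementsByTagName("span")[0].click()'))
        assert(splash:wait(1))

        -- splash:url() is the URL after the click, i.e. the
        -- ../innerpages/<name>.php address that go1() navigated to
        return {
            url = splash:url(),
            html = splash:html()
        }
    end
    """
    for url in self.start_urls:
        yield scrapy.Request(url, self.parse_result, meta={
            'splash': {
                'args': {'lua_source': script},
                'endpoint': 'execute',
            }
        })

def parse_result(self, response):
    splash_json = json.loads(response.body_as_unicode())
    target_url = splash_json["url"]  # the URL produced by go1()
    selector = scrapy.Selector(text=splash_json["html"], type="html")
    ...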