How to handle JavaScript with Scrapy

Date: 2015-08-21 13:46:16

Tags: javascript selenium web-scraping scrapy scrapy-spider

Spider for reference:

import scrapy
from scrapy.spiders import Spider
from scrapy.selector import Selector
from script.items import ScriptItem


class RunSpider(scrapy.Spider):
    name = "run"
    allowed_domains = ["stopitrightnow.com"]
    start_urls = (
        'http://www.stopitrightnow.com/',
    )

    def parse(self, response):
        for widget in response.xpath('//div[@class="shopthepost-widget"]'):
            # print widget.extract()
            item = ScriptItem()
            item['url'] = widget.xpath('.//a/@href').extract()
            url = item['url']
            # print url
            yield item

When I run it, the terminal output is as follows:

2015-08-21 14:23:51 [scrapy] DEBUG: Scraped from <200 http://www.stopitrightnow.com/>
{'url': []}
<div class="shopthepost-widget" data-widget-id="708473">
<script type="text/javascript">!function(d,s,id){var e, p = /^http:/.test(d.location) ? 'http' : 'https';if(!d.getElementById(id)) {e = d.createElement(s);e.id = id;e.src = p + '://' + 'widgets.rewardstyle.com' + '/js/shopthepost.js';d.body.appendChild(e);}if(typeof window.__stp === 'object') if(d.readyState === 'complete') {window.__stp.init();}}(document, 'script', 'shopthepost-script');</script><br>

This is the HTML:

<div class="shopthepost-widget" data-widget-id="708473" data-widget-uid="1"><div id="stp-55d44feabd0eb" class="stp-outer stp-no-controls">
    <a class="stp-control stp-left stp-hidden">&lt;</a>
    <div class="stp-inner" style="width: auto">
        <div class="stp-slide" style="left: -0%">
                        <a href="http://rstyle.me/iA-n/zzhv34c_" target="_blank" rel="nofollow" class="stp-product " data-index="0" style="margin: 0 0px 0 0px">
                <span class="stp-help"></span>
                <img src="//images.rewardstyle.com/img?v=2.13&amp;p=n_24878713">
                            </a>
                        <a href="http://rstyle.me/iA-n/zzhvw4c_" target="_blank" rel="nofollow" class="stp-product " data-index="1" style="margin: 0 0px 0 0px">
                <span class="stp-help"></span>
                <img src="//images.rewardstyle.com/img?v=2.13&amp;p=n_24878708">

To me it looks like it hits a wall when it tries to activate the JavaScript. I know JavaScript cannot run inside Scrapy, but there must be a way to get those links. I have looked at Selenium but cannot get the hang of it.

Any help is welcome.

2 answers:

Answer 0 (score: 5):

I solved it with ScrapyJS.

Follow the setup instructions from the official documentation and this answer.
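For reference, a minimal sketch of what that ScrapyJS/Splash setup typically looks like in settings.py, assuming a Splash instance is running locally; the middleware and dupefilter names are taken from the ScrapyJS README, so double-check them against the version you install:

# settings.py -- sketch of a typical ScrapyJS + Splash configuration
SPLASH_URL = 'http://localhost:8050'  # address of your running Splash instance

DOWNLOADER_MIDDLEWARES = {
    # Routes requests that carry 'splash' meta through the Splash server
    'scrapyjs.SplashMiddleware': 725,
}

# Dupefilter that takes the Splash arguments into account
DUPEFILTER_CLASS = 'scrapyjs.SplashAwareDupeFilter'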

Here is the test spider I used:

# -*- coding: utf-8 -*-
import scrapy


class TestSpider(scrapy.Spider):
    name = "run"
    allowed_domains = ["stopitrightnow.com"]
    start_urls = (
        'http://www.stopitrightnow.com/',
    )

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(url, meta={
                'splash': {
                    'endpoint': 'render.html',
                    'args': {'wait': 0.5}
                }
            })

    def parse(self, response):
        for widget in response.xpath('//div[@class="shopthepost-widget"]'):
            print widget.xpath('.//a/@href').extract()

Here is what I got on the console:

[u'http://rstyle.me/iA-n/7bk8r4c_', u'http://rstyle.me/iA-n/7bk754c_', u'http://rstyle.me/iA-n/6th5d4c_', u'http://rstyle.me/iA-n/7bm3s4c_', u'http://rstyle.me/iA-n/2xeat4c_', u'http://rstyle.me/iA-n/7bi7f4c_', u'http://rstyle.me/iA-n/66abw4c_', u'http://rstyle.me/iA-n/7bm4j4c_']
[u'http://rstyle.me/iA-n/zzhv34c_', u'http://rstyle.me/iA-n/zzhvw4c_', u'http://rstyle.me/iA-n/zwuvk4c_', u'http://rstyle.me/iA-n/zzhvr4c_', u'http://rstyle.me/iA-n/zzh9g4c_', u'http://rstyle.me/iA-n/zzhz54c_', u'http://rstyle.me/iA-n/zwuuy4c_', u'http://rstyle.me/iA-n/zzhx94c_']

Answer 1 (score: 5):

A non-JavaScript alternative to Alecxe's approach is to inspect where the page loads the content from and add that functionality in yourself (see this SO question for more details).

In this case we get the following (screenshot of the network traffic):

So for <div class="shopthepost-widget" data-widget-id="708473">, the JavaScript is executed to embed the URL "widgets.rewardstyle.com/stps/708473.html".

You can generate requests to those URLs yourself and process them manually:

from scrapy import Request  # needed for the manual widget requests

def parse(self, response):
    for widget in response.xpath('//div[@class="shopthepost-widget"]'):
        widget_id = widget.xpath('@data-widget-id').extract()[0]
        widget_url = "http://widgets.rewardstyle.com/stps/{id}.html".format(id=widget_id)
        yield Request(widget_url, callback=self.parse_widget)

def parse_widget(self, response):
    for link in response.xpath('//a[contains(@class, "stp-product")]'):
        item = JavasItem()  # Name provided by author, see comments below
        item['link'] = link.xpath("@href").extract()
        yield item

    # Do whatever else you want with the opened page.

If you need to associate these widgets with the post/article they belong to, pass that information along with the request via meta.
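A hedged sketch of what that could look like, building on the code above; the meta key 'post_url' and the extra item field are illustrative placeholders, not something taken from the original answer:

from scrapy import Request

def parse(self, response):
    for widget in response.xpath('//div[@class="shopthepost-widget"]'):
        widget_id = widget.xpath('@data-widget-id').extract()[0]
        widget_url = "http://widgets.rewardstyle.com/stps/{id}.html".format(id=widget_id)
        # Carry the originating page's URL along with the widget request.
        yield Request(widget_url, callback=self.parse_widget,
                      meta={'post_url': response.url})

def parse_widget(self, response):
    post_url = response.meta['post_url']  # read back what was passed in
    for link in response.xpath('//a[contains(@class, "stp-product")]'):
        item = JavasItem()
        item['link'] = link.xpath("@href").extract()
        item['post_url'] = post_url  # assumes the item defines such a field
        yield item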

Edit: parse_widget() has been updated. It uses contains() to match the class, since it has a trailing space at the end. You could also use a CSS selector instead, but that is really your call.
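As a quick illustration of the CSS-selector alternative (a sketch; a class selector is not affected by the trailing space in the class attribute):

# Inside parse_widget(): same extraction, using CSS selectors instead of XPath.
for link in response.css('a.stp-product'):
    item = JavasItem()
    item['link'] = link.css('::attr(href)').extract()
    yield item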