Scrapy Splash spider does not follow links to fetch new pages

Asked: 2019-02-25 13:48:16

Tags: python scrapy scrapy-splash

I am scraping data from a page that uses JavaScript links to navigate to new pages. I am using Scrapy + Splash to fetch this data, but for some reason the links are not being followed.

Here is the code for my spider:

import scrapy
from scrapy_splash import SplashRequest

script = """
    function main(splash, args)
        local javascript = args.javascript
        assert(splash:runjs(javascript))
        splash:wait(0.5)

        return {
               html = splash:html()
        }
    end
"""


page_url = "https://www.londonstockexchange.com/exchange/prices-and-markets/stocks/exchange-insight/trade-data.html?page=0&pageOffBook=0&fourWayKey=GB00B6774699GBGBXAMSM&formName=frmRow&upToRow=-1"


class MySpider(scrapy.Spider):
    name = "foo_crawler"
    download_delay = 5.0

    custom_settings = {
        'DOWNLOADER_MIDDLEWARES': {
            'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
            'scrapy_fake_useragent.middleware.RandomUserAgentMiddleware': 400,
            'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
        },
        #'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter'
    }

    def start_requests(self):
        yield SplashRequest(url=page_url,
                            callback=self.parse)

    # Parses the first page of the ticker, then requests each following page
    def parse(self, response):
        try:
            self.extract_data_from_page(response)

            href = response.xpath('//div[@class="paging"]/p/a[contains(text(),"Next")]/@href')
            print("href: {0}".format(href))

            if href:
                # The href has the form "javascript:<code>"; keep only the code
                javascript = href.extract_first().split(':')[1].strip()

                yield SplashRequest(response.url, self.parse,
                                    cookies={'store_language': 'en'},
                                    endpoint='execute',
                                    args={'lua_source': script, 'javascript': javascript})

        except Exception as err:
            print("The following error occurred: {0}".format(err))



    def extract_data_from_page(self, response):
        url = response.url
        page_num = url.split('page=')[1].split('&')[0]
        print("extract_data_from_page() called on page: {0}.".format(url))
        filename = "page_{0}.html".format(page_num)
        with open(filename, 'w') as f:
            f.write(response.text)




    def handle_error(self, failure):
        print("Error: {0}".format(failure))

Only the first page is fetched; the spider never "clicks" through to the subsequent pages via the links at the bottom of the page.

How can I fix this so that the pager links at the bottom of the page are followed?

2 Answers:

Answer 0 (score: 1)

Your code looks fine. The one problem is that because the yielded requests all have the same URL, they are ignored by the duplicate filter. Just uncomment DUPEFILTER_CLASS and try again:

custom_settings = {
    ...
    'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter',
}
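
Alternatively, if you want to keep duplicate filtering enabled for the rest of the crawl, you can exempt just the pagination request by passing dont_filter=True; SplashRequest accepts the same keyword arguments as a plain scrapy Request. A sketch, assuming everything else stays as in the question:

if href:
    javascript = href.extract_first().split(':')[1].strip()

    # dont_filter=True exempts only this request from the dupefilter,
    # so duplicate filtering stays on for everything else
    yield SplashRequest(response.url, self.parse,
                        cookies={'store_language': 'en'},
                        endpoint='execute',
                        args={'lua_source': script, 'javascript': javascript},
                        dont_filter=True)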

Edit: to page through the data without running the JavaScript, you could do it like this:

page_url = "https://www.londonstockexchange.com/exchange/prices-and-markets/stocks/exchange-insight/trade-data.html?page=%s&pageOffBook=0&fourWayKey=GB00B6774699GBGBXAMSM&formName=frmRow&upToRow=-1"

page_number_regex = re.compile(r"'frmRow',(\d+),")
...
def start_requests(self):
    yield SplashRequest(url=page_url % 0,
                        callback=self.parse)
...
if href:
    javascript = href.extract_first().split(':')[1].strip()
    matched = re.search(self.page_number_regex, javascript)
    if matched:
        yield SplashRequest(page_url % matched.group(1), self.parse,
                            cookies={'store_language': 'en'},
                            endpoint='execute',
                            args={'lua_source': script, 'javascript': javascript})

I would still be interested in a solution that runs the JavaScript, though.
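
For illustration, here is what that regex extracts, assuming the "Next" link's JavaScript looks something like setPage('frmRow',2,...) — the exact call is a guess, so adapt it to whatever the real href contains:

import re

page_number_regex = re.compile(r"'frmRow',(\d+),")

# Hypothetical payload taken from a "Next" link's href
javascript = "setPage('frmRow',2,'');"

matched = re.search(page_number_regex, javascript)
if matched:
    print(matched.group(1))  # -> 2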

Answer 1 (score: 1)

You can use the page query-string variable. It starts at 0, so the first page is page=0. You can see the total number of pages by looking at:

<div class="paging">
  <p class="floatsx">&nbsp;Page 1 of 157 </p>
</div>

That way you can request pages 0 through 156 directly.
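
A minimal sketch of that approach — the spider skeleton, names, and the XPath for the paging text below are illustrative assumptions based on the snippet above, not code from the question:

import re

import scrapy
from scrapy_splash import SplashRequest

PAGE_URL = ("https://www.londonstockexchange.com/exchange/prices-and-markets/"
            "stocks/exchange-insight/trade-data.html?page={0}&pageOffBook=0"
            "&fourWayKey=GB00B6774699GBGBXAMSM&formName=frmRow&upToRow=-1")


class PagedSpider(scrapy.Spider):
    name = "paged_crawler"  # hypothetical name

    def start_requests(self):
        # Fetch page 0 first to learn the total page count
        yield SplashRequest(url=PAGE_URL.format(0), callback=self.parse_first)

    def parse_first(self, response):
        # "Page 1 of 157" -> 157
        text = response.xpath(
            '//div[@class="paging"]/p[@class="floatsx"]/text()'
        ).extract_first(default="")
        matched = re.search(r"Page\s+\d+\s+of\s+(\d+)", text)
        if matched:
            total = int(matched.group(1))
            # Pages are 0-indexed: request pages 1..total-1 (page 0 is this one)
            for page in range(1, total):
                yield SplashRequest(url=PAGE_URL.format(page),
                                    callback=self.parse_page)

    def parse_page(self, response):
        # ...extract data as in the question's extract_data_from_page()...
        pass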