scrapy, splash, lua, button click

Date: 2017-11-05 10:12:50

Tags: python lua scrapy scrapy-splash

I am new to all of these tools. My goal is to extract all URLs from a number of web pages that are paginated via a "Weiter" / "next" button, and to do this for several start URLs. I decided to try Scrapy. Since the pages are generated dynamically, I learned that I needed an additional tool and installed Splash for this. The installation worked fine and I set everything up following the tutorial. I then managed to get the first results page by sending "Return" in the search input field; doing the same in a browser gives me the results I need. My problem is clicking the "next" button on the generated page: I don't know exactly how to do it, and as far as I can tell from several pages on the topic, it is not trivial. I have tried the suggested solutions without success. I think I am not too far off, and I would appreciate some help. Thanks.

My settings.py:

BOT_NAME = 'gr'
SPIDER_MODULES = ['gr.spiders']
NEWSPIDER_MODULE = 'gr.spiders'
ROBOTSTXT_OBEY = True
DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
SPLASH_URL = 'http://localhost:8050'
SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'

My spider:

# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy_splash import SplashRequest
import json
# import base64

class GrSpider(scrapy.Spider):
    name = 'gr_'
    allowed_domains = ['lawsearch.gr.ch']
    start_urls = ['http://www.lawsearch.gr.ch/le/']

    def start_requests(self):

        script = """
        function main(splash)
            assert(splash:go(splash.args.url))
            splash:set_viewport_full()
            splash:wait(0.3)
            splash:send_keys("<Return>")
            splash:wait(0.3)
            return splash:html()
        end
        """

        for url in self.start_urls:

             yield SplashRequest(url=url,
                    callback=self.parse,
                    endpoint='execute',
                    args={'lua_source': script})

    def parse(self, response):

        script3 = """
            function main(splash)
            splash:autoload{url="https://code.jquery.com/jquery-3.2.1.min.js"}
            assert(splash:go(splash.args.url))
            splash:set_viewport_full()

--            splash:wait(2.8)
--            local element = splash:select('.result-pager-next-active .simplebutton')
--            element:mouse_click()

--            local bounds = element:bounds()
--            assert(element:mouse_click{x=bounds.width, y=bounds.height})

--            next attempt
-- https://blog.scrapinghub.com/2015/03/02/handling-javascript-in-scrapy-with-splash/

--  https://stackoverflow.com/questions/35720323/scrapyjs-splash-click-controller-button
--  assert(splash:runjs("$('#date-controller > a:first-child').click()"))

--  https://github.com/scrapy-plugins/scrapy-splash/issues/27
--            assert(splash:runjs("$('#result-pager-next-active .simplebutton').click()"))



-- https://developer.mozilla.org/de/docs/Web/API/Document/querySelectorAll
-- look at this:
-- https://stackoverflow.com/questions/38043672/splash-lua-script-to-do-multiple-clicks-and-visits

-- elementList = baseElement.querySelectorAll(selectors)
-- var domRect = element.getBoundingClientRect();
-- var rect = obj.getBoundingClientRect();
-- https://stackoverflow.com/questions/34001917/queryselectorall-with-multiple-conditions

            local get_dimensions = splash:jsfunc([[
               function () {
                 var el = document.querySelectorAll(".result-pager-next-active .simplebutton")[0];
                 var rect = el.getClientRects()[0];
                 return {'x': rect.left, 'y': rect.top};
               }
            ]])
--            splash:set_viewport_full()
            splash:wait(0.1)
            local dimensions = get_dimensions()
            splash:mouse_click(dimensions.x, dimensions.y)


--            splash:runjs("document.querySelectorAll('result-pager-next-active ,simplebutton')[1].click()")
--            assert(splash:runjs("$('.result-pager-next-active .simplebutton')[1].click()"))
--            assert(splash:runjs("$('.simplebutton')[12].click()"))

            splash:wait(1.6)
            return splash:html()
        end
        """

        for teil in response.xpath('//div/div/div/div/a'):
            yield {
                'link': teil.xpath('./@href').extract()
            }

        next_page = response.xpath('//div[@class="v-label v-widget simplebutton v-label-simplebutton v-label-undef-w"]').extract_first()

#        print response.body

        print('----------------------')
#        print  response.xpath('//div[@class="v-slot v-slot-simplebutton"]/div[contains(text(), "Weiter")]').extract_first()
#        print  response.xpath('//div[@class="v-slot v-slot-simplebutton"]/div[contains(text(), "Weiter")]').extract()

#       class="v-slot v-slot-simplebutton"
#        nextPage = HtmlXPathSelector(response).select("//div[@class='paginationControl']/a[contains(text(),'Link Text Next')]/@href")

#        neue_seite=response.url

#        print response.url

        if next_page is not None: 
#            yield SplashRequest(url=neue_seite,

            yield SplashRequest(response.url,
                    callback=self.parse,
                    dont_filter=True,
                    endpoint='execute',
                    args={'lua_source': script3})
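As an aside, newer Splash versions (2.3+) expose an element API that can replace the hand-rolled `jsfunc` above. A minimal sketch of such a Lua script, assuming the `.result-pager-next-active .simplebutton` selector from the attempts above actually matches the "Weiter" button:

```python
# Sketch only: a cleaner Lua script for the "click next" step, using
# splash:select() (Splash >= 2.3) instead of computing coordinates by hand.
# The CSS selector is taken from the attempts above and is an assumption.
click_next_script = """
function main(splash)
    assert(splash:go(splash.args.url))
    splash:wait(1.0)
    -- select the "Weiter" button and click it directly
    local button = splash:select('.result-pager-next-active .simplebutton')
    assert(button)
    assert(button:mouse_click())
    splash:wait(1.6)
    return splash:html()
end
"""
```

This string would be passed as `args={'lua_source': click_next_script}` in the `SplashRequest`, exactly like `script3` above.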

1 Answer:

Answer 0 (score: 0)

You don't always have to use Splash. If the next button is a link, you can take its href attribute and send the request back to the parse function.
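A minimal sketch of that approach, assuming a hypothetical XPath for the "Weiter" link (the real selector has to be taken from the page):

```python
from urllib.parse import urljoin

def next_page_url(current_url, next_href):
    # Resolve the href extracted from the "Weiter" link against the
    # current page URL (this is what scrapy's response.urljoin() does).
    if next_href is None:
        return None
    return urljoin(current_url, next_href)

# Inside parse() this would look roughly like (hypothetical XPath):
#   next_href = response.xpath('//a[contains(text(), "Weiter")]/@href').extract_first()
#   url = next_page_url(response.url, next_href)
#   if url is not None:
#       yield scrapy.Request(url, callback=self.parse)
```

Since the request goes through Scrapy's normal downloader, no Lua script is needed for pages where the pagination is a plain link rather than a JavaScript button.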