Clicking a button with Selenium inside a Scrapy spider

Asked: 2018-05-22 14:06:04

Tags: python selenium selenium-webdriver scrapy

I'm just getting started; I've been at this for a week or two, using only the internet for help, but now I've reached a point I can't figure out, and I can't find my problem answered anywhere else. In case my program is unclear: I want to scrape the data, then click a button and scrape more data, repeating until I hit data I've already collected, then move on to the next page in the list. I've gotten to the point where I scrape the first 8 entries, but I can't find a way to click the "See more!" button. I know I should use Selenium and the button's XPath. Anyway, here is my code:

import scrapy
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException

class KickstarterSpider(scrapy.Spider):
    name = 'kickstarter'
    allowed_domains = ['kickstarter.com']
    start_urls = [
        "https://www.kickstarter.com/projects/zwim/zwim-smart-swimming-goggles/community",
        "https://www.kickstarter.com/projects/zunik/oriboard-the-amazing-origami-multifunctional-cutti/community",
    ]

    def __init__(self, *args, **kwargs):  # was misspelled as _init_
        super().__init__(*args, **kwargs)
        self.driver = webdriver.Chrome(chromedriver)  # chromedriver: path to the driver binary

    def parse(self, response):
        self.driver.get('https://www.kickstarter.com/projects/zwim/zwim-smart-swimming-goggles/community')
        backers = response.css('.founding-backer.community-block-content')
        b = backers[0]

        while True:
            try:
                # was `selfdriver`; note this XPath targets the page wrapper,
                # not the "See more!" button itself
                self.driver.find_element_by_xpath('//*[@id="content-wrap"]').click()
            except NoSuchElementException:
                break
        self.driver.close()

    def parse2(self, response):
        print('you are here!')

        for b in backers:  # note: `backers` is not defined in this method's scope
            name = b.css('.name.js-founding-backer-name::text').extract_first()
            backed = b.css('.backing-count.js-founding-backer-backings::text').extract_first()
            print(name, backed)

1 Answer:

Answer 0 (score: 0)

Be sure the web driver inside Scrapy actually loads the page and interprets the JS; Scrapy's own downloader does not execute JavaScript, so only the Selenium-rendered page will contain the extra backers (idk... it could be a solution).
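To illustrate the loop the question is after: click "See more!" repeatedly until the button no longer exists, then parse whatever has been loaded. The sketch below uses a hypothetical `StubDriver` in place of a real `selenium.webdriver.Chrome` so the pattern runs without a browser; with real Selenium you would catch `selenium.common.exceptions.NoSuchElementException` instead of `LookupError`, and the XPath shown (`//button[contains(., "See more")]`) is an assumption, not the page's actual markup.

```python
# Hypothetical sketch of the "click until the button disappears" pattern.
# StubDriver stands in for selenium.webdriver.Chrome: it pretends the
# "See more!" button exists for the first 3 clicks, then raises, the way
# find_element_by_xpath raises NoSuchElementException on a real page.

class StubDriver:
    def __init__(self):
        self.clicks = 0

    def find_element_by_xpath(self, xpath):
        if self.clicks >= 3:
            raise LookupError("no such element: %s" % xpath)
        return self  # real Selenium would return a WebElement here

    def click(self):
        self.clicks += 1


def load_all_backers(driver, see_more_xpath='//button[contains(., "See more")]'):
    """Click the button until it is gone; return how many clicks succeeded."""
    while True:
        try:
            driver.find_element_by_xpath(see_more_xpath).click()
        except LookupError:  # with real Selenium: NoSuchElementException
            break
    return driver.clicks


print(load_all_backers(StubDriver()))  # → 3
```

After the loop finishes, hand `driver.page_source` to a `scrapy.Selector` so the same `.css(...)` extraction logic can run against the fully expanded page rather than Scrapy's original (JS-free) response.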