How do I scrape data that is generated after clicking "Load more" when the URL stays the same?

Date: 2019-06-13 19:56:32

Tags: python selenium web-scraping scrapy

I am trying to scrape all the courses in a given category, or across the whole site, at https://www.classcentral.com/subject. However, the site only displays 55 courses at a time (including ads); to see more you must click a "Load more" button, which loads 50 additional courses. I use Selenium to click the "Load more" button and then have parse_subject call itself to yield items for the newly loaded courses. But the scraper keeps scraping only the first 55 courses, indefinitely. How can I get it to scrape the next set of 50 courses without re-scraping the first set over and over, and continue until there are no courses left? Please help.

Here is the HTML for the "Load the next 50 courses" button:

<button id="show-more-courses"
        class="btn-blue-outline width-14-16 medium-up-width-1-2 btn--large margin-top-medium text-center"
        data-page="2"
        data-track-click="listing_click"
        data-track-props='{"type": "Load More Courses", "page": "2"}'>
    <span class="small-up-hidden text--bold">Load more</span>
    <span class="hidden small-up-inline-block text--bold">
        Load the next 50 courses of 1127
    </span>
</button>

Here is my code:

import scrapy
from scrapy.http import Request
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException


class SubjectsSpider(scrapy.Spider):
    name = 'subjects'
    allowed_domains = ['class-central.com']
    start_urls = ['http://class-central.com/subjects']

    def __init__(self, subject=None, *args, **kwargs):
        super().__init__(*args, **kwargs)  # keep Scrapy's base initialisation
        self.subject = subject

    def parse(self, response):
        if self.subject:
            self.logger.info('Scraping subject: %s', self.subject)
            subject_url = response.xpath('//*[contains(@title, "' + self.subject + '")]/@href').extract_first()
            yield Request(response.urljoin(subject_url), callback=self.parse_subject, dont_filter=True)
        else:
            self.logger.info('Scraping all subjects')
            subjects = response.xpath('//*[@class="unit-block unit-fill"]/a/@href').extract()
            for subject in subjects:
                self.logger.info(subject)
                yield Request(response.urljoin(subject), callback=self.parse_subject, dont_filter=True)


    def parse_subject(self, response):
        subject_name = response.xpath('//title/text()').extract_first()
        subject_name = subject_name.split(' | ')[0]
        courses = response.xpath('//*[@itemtype="http://schema.org/Event"]')
        for course in courses:
            course_name = course.xpath('.//*[@itemprop="name"]/text()').extract_first()
            course_url = course.xpath('.//*[@itemprop="url"]/@href').extract_first()
            absolute_course_url = response.urljoin(course_url)

            yield {
                'subject_name': subject_name,
                'course_name': course_name,
                'absolute_course_url': absolute_course_url,
            }
        # For loading more courses
        global driver  # declared global so that the browser window does not close after the request finishes
        driver = webdriver.Chrome('C:/webdrivers/chromedriver')
        driver.get(response.url)
        print(driver.current_url)
        try:
            button_element = driver.find_element_by_id('show-more-courses')
            # button_element.click()
            driver.execute_script("arguments[0].click();", button_element)
            yield Request(response.url, callback=self.parse_subject, dont_filter=True)
        except NoSuchElementException:
            pass
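
For context on why the spider loops: yield Request(response.url, ...) asks Scrapy's own downloader to fetch the original URL again, while the Selenium click happens in a separate browser whose updated DOM Scrapy never sees. Below is a minimal sketch of one way to keep everything in a single Selenium session instead. It is illustrative only: the load_all_courses helper is hypothetical, and the time.sleep wait and the Selector hand-off are assumptions, not part of the original spider.

import time

from scrapy.selector import Selector
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException


def load_all_courses(url):
    """Click 'Load more' until the button disappears, then return the final HTML."""
    driver = webdriver.Chrome('C:/webdrivers/chromedriver')
    driver.get(url)
    while True:
        try:
            button = driver.find_element_by_id('show-more-courses')
        except NoSuchElementException:
            break  # no button left, so every course is on the page
        driver.execute_script("arguments[0].click();", button)
        time.sleep(2)  # crude wait for the AJAX content; WebDriverWait would be more robust
    html = driver.page_source
    driver.quit()
    return html

# parse_subject could then run its existing XPath expressions over
# Selector(text=load_all_courses(response.url)) instead of over response.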

1 Answer:

Answer 0 (score: 0)

In my opinion, Selenium should only be used when there is no other solution; the requests library is faster and more reliable. Here is some code that loops through all the pages, after which you can use Beautiful Soup to parse the HTML. You will need to install Beautiful Soup first if you haven't already.


import requests
from bs4 import BeautifulSoup

for page in range(1, 10):  # pages 1-9; raise the upper bound to cover more "Load the next 50 courses" clicks
    params = {'page': str(page)}
    next_page = requests.get("https://www.classcentral.com/subject/cs", params=params)
    soup = BeautifulSoup(next_page.text, 'html.parser')
    # parse the HTML here
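
Since the total number of pages is usually not known up front, a variant worth considering is to loop until a page comes back empty; the button's data-page attribute in the question suggests the server accepts a page query parameter, which is what this relies on. A sketch, assuming the paginated responses carry the same itemtype="http://schema.org/Event" markup the question's spider targets:

import requests
from bs4 import BeautifulSoup

page = 1
while True:
    resp = requests.get("https://www.classcentral.com/subject/cs",
                        params={'page': str(page)})
    soup = BeautifulSoup(resp.text, 'html.parser')
    # Select courses by the same schema.org Event markup the spider uses.
    courses = soup.select('[itemtype="http://schema.org/Event"]')
    if not courses:
        break  # an empty page means everything has been fetched
    for course in courses:
        name = course.select_one('[itemprop="name"]')
        if name:
            print(name.get_text(strip=True))
    page += 1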