Selenium doesn't scroll properly, so the web page isn't scraped successfully

Date: 2020-01-23 13:22:07

Tags: python selenium web-scraping scrapy

I am trying to scrape this page (hereafter, the main page) using Selenium + Scrapy.

All the content there is loaded with JavaScript as you scroll down the page. I scrape each individual product page in the parse method (via the a.product-list__item.normal.size-normal links on the main page). I found a scroll-down solution here, but it doesn't seem to work: after the webdriver method (ScrollUntilLoaded) is called, only 29 URL tags show up in start_requests. The product pages are also handled by the webdriver, since they too are loaded with JavaScript (the parse method).

But that is not the only problem. Of those 29 pages, only 24 actually have their data crawled. So I added a wait.until for the product's image before extracting data from the page, but that didn't help.

What could be the reason for this behavior? Is it a problem with Selenium or with the website itself?

import time
import scrapy
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

class SilpoSpider(scrapy.Spider):
    name = 'SilpoSpider'

    def __init__(self):
        self.driver = webdriver.Chrome()
        self.wait = WebDriverWait(self.driver, 10)

    def ScrollUntilLoaded(self):
        """scroll webdriver`s content (web page) to the bottom
        the purpose of this method is to load all content that loads with javascript"""
        check_height = self.driver.execute_script("return document.body.scrollHeight;")
        while True:
            self.driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
            try:
                self.wait.until(lambda driver: driver.execute_script("return document.body.scrollHeight;") > check_height)
                check_height = self.driver.execute_script("return document.body.scrollHeight;")
            except TimeoutException:
                break

    def start_requests(self):
        # load all content from the page with references to all products
        self.main_url = 'https://silpo.ua/offers'
        self.driver.get(self.main_url)
        self.ScrollUntilLoaded()
        # get all URLs to all particular products pages
        urls = [ref.get_attribute('href')
                for ref in self.driver.find_elements_by_css_selector('a.product-list__item.normal.size-normal')]
        # len(urls) == 29
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

        self.driver.quit()

    def parse(self, response):
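        # the Scrapy response body is ignored here; the page is re-rendered in
        # the webdriver so the JavaScript-loaded content is available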
        self.driver.get(response.url)
        self.wait.until(
            EC.presence_of_element_located((By.CSS_SELECTOR, ".image-holder img"))
        )
        yield {"image": self.driver.find_element_by_css_selector(".image-holder img").get_attribute('src'),
            "name": self.driver.find_element_by_css_selector('h1.heading3.product-preview__title span').text,
            "banknotes": int(self.driver.find_element_by_css_selector('.product-price__integer').text),
            "coins": int(self.driver.find_element_by_css_selector('.product-price__fraction').text),
            "old_price": float(self.driver.find_element_by_css_selector('.product-price__old').text),
            "market":"silpo"
            }
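
For completeness, a spider written this way can be started without the scrapy command-line tool by driving it from a small script. A minimal runner sketch, assuming the SilpoSpider class above is importable; the products.json file name is an arbitrary choice, not from the question:

from scrapy.crawler import CrawlerProcess

# run the spider in-process and dump the scraped items to a JSON file
process = CrawlerProcess(settings={
    'FEED_URI': 'products.json',   # output file name is an assumption
    'FEED_FORMAT': 'json',
})
process.crawl(SilpoSpider)
process.start()  # blocks until the crawl finishes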

1 Answer:

Answer 0 (score: 1)

Get rid of your existing ScrollUntilLoaded() method entirely and try the following approach in its place. It turns out that the method above doesn't scroll at all. It would work even better if you gave the page a longer time to load.

def ScrollUntilLoaded(self):
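    # repeatedly scroll the page footer into view and wait for more product
    # links to load; stop once the link count no longer grows within the wait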
    while True:
        footer = self.wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "h4.footer__site-map-heading")))
        current_len = len(self.wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "a.product-list__item"))))
        try:
            self.driver.execute_script("arguments[0].scrollIntoView();", footer)
            self.wait.until(lambda driver: len(self.driver.find_elements_by_css_selector("a.product-list__item")) > current_len)
        except TimeoutException:
            break
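
On the point about giving the page a longer time to load: the spider uses a single 10-second WebDriverWait for everything. A minimal sketch of a separate, more generous wait dedicated to the scroll loop (the 30-second figure is an assumption, not from the answer):

def __init__(self):
    self.driver = webdriver.Chrome()
    self.wait = WebDriverWait(self.driver, 10)         # short wait for simple element lookups
    self.scroll_wait = WebDriverWait(self.driver, 30)  # longer wait for lazy-loaded items

Using self.scroll_wait inside ScrollUntilLoaded() makes the loop tolerant of slow responses: it only stops once no new product links have appeared within the longer window, instead of giving up after 10 seconds.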