Python Scrapy - Selenium - Requesting the next page

Date: 2017-06-14 12:17:30

Tags: python selenium scrapy

I am trying to build a web scraper that goes to a link, waits for the JavaScript content to load, then collects all the links to the listed articles and moves on to the next page. The problem is that it always scrapes the first URL ("https://techcrunch.com/search/heartbleed") instead of following the URLs I pass it. Why does the code below not scrape the new URL I pass in the request? I am out of ideas...

import scrapy
from scrapy.http.request import Request
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support.ui import WebDriverWait
import time


class TechcrunchSpider(scrapy.Spider):
    name = "techcrunch_spider_performance"
    allowed_domains = ['techcrunch.com']
    start_urls = ['https://techcrunch.com/search/heartbleed']



    def __init__(self):
        self.driver = webdriver.PhantomJS()
        self.driver.set_window_size(1120, 550)
        #self.driver = webdriver.Chrome("C:\Users\Daniel\Desktop\Sonstiges\chromedriver.exe")
        self.driver.wait = WebDriverWait(self.driver, 5)    # waits up to 5 seconds

    def parse(self, response):
        start = time.time()     # timing measurement
        self.driver.get(response.url)

        # waits up to 5 seconds (defined above) for the condition; after that a TimeoutException is raised
        try:    

            self.driver.wait.until(EC.presence_of_element_located(
                (By.CLASS_NAME, "block-content")))
            print("Found : block-content")

        except TimeoutException:
            self.driver.close()
            print(" block-content NOT FOUND IN TECHCRUNCH !!!")


        # Crawl the JavaScript-generated content with Selenium

        ahref = self.driver.find_elements(By.XPATH,'//h2[@class="post-title st-result-title"]/a')

        hreflist = []
        # Collect all links to the individual articles
        for elem in ahref :
            hreflist.append(elem.get_attribute("href"))


        for elem in hreflist :
            print(elem)
            yield scrapy.Request(url=elem , callback=self.parse_content)


        # Get the link for the next page
        try:    
            next = self.driver.find_element(By.XPATH,"//a[@class='page-link next']")
            nextpage = next.get_attribute("href")
            print("JETZT KOMMT NEXT :")
            print(nextpage)
            #newresponse = response.replace(url=nextpage)
            yield scrapy.Request(url=nextpage, dont_filter=False)

        except TimeoutException:
            self.driver.close()
            print(" NEXT NOT FOUND(OR EOF) IM CLOSING MYSELF !!!")



        end = time.time()
        print("Time elapsed : ")
        finaltime = end-start
        print(finaltime)


    def parse_content(self, response):    
        title = self.driver.find_element(By.XPATH,"//h1")
        titletext = title.get_attribute("innerHTML")
        print(" h1 : ")
        print(title)
        print(titletext)

1 Answer:

Answer 0 (score: 1):

The first problem is:

for elem in hreflist :
    print(elem)
    yield scrapy.Request(url=elem , callback=self.parse_content)

This code yields a scrapy Request for every link it found. But:

def parse_content(self, response):    
    title = self.driver.find_element(By.XPATH,"//h1")
    titletext = title.get_attribute("innerHTML")

The parse_content function tries to use the selenium driver to parse the page, but the driver never loaded that page. You can either parse the page using the response object that scrapy gives you, or load the page in the webdriver first (self.driver.get(....)).
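
For instance, a minimal sketch of the first option, extracting the title from the scrapy response instead of from the shared driver (the //h1 selector is simply carried over from the question's code):

def parse_content(self, response):
    # Use the scrapy response that was actually fetched for this URL,
    # not the selenium driver, which is still on the page it last loaded.
    titletext = response.xpath("//h1/text()").extract_first()
    print(" h1 : ")
    print(titletext)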

Also, scrapy is asynchronous while selenium is not. Scrapy does not block after a yield Request; it keeps executing code because it is built on twisted and can start multiple requests concurrently. A single selenium driver instance cannot keep up with scrapy's multiple concurrent requests. (One way around this is to replace every yield with selenium code, even if that means losing execution time.)
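
A rough sketch of that workaround, assuming the same driver and wait from the question, with every article visited synchronously inside parse so the single driver only ever handles one page at a time:

def parse(self, response):
    self.driver.get(response.url)
    self.driver.wait.until(EC.presence_of_element_located(
        (By.CLASS_NAME, "block-content")))

    # Collect the article links from the rendered search page
    hreflist = [a.get_attribute("href") for a in self.driver.find_elements(
        By.XPATH, '//h2[@class="post-title st-result-title"]/a')]

    for href in hreflist:
        # Load each article with the same driver instead of yielding a scrapy
        # Request, so the synchronous driver is never asked to serve several
        # pages at once.
        self.driver.get(href)
        title = self.driver.find_element(By.XPATH, "//h1")
        print(title.get_attribute("innerHTML"))

This gives up scrapy's concurrency for the article pages, which is the loss of execution time mentioned above.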