How to avoid a StaleElementReferenceException when getting elements from different pages?

Asked: 2019-06-12 11:37:47

Tags: python web-scraping selenium-chromedriver

I want to get all the results of a race. The site displays 50 rows per page. I use selenium to navigate to the next page (the same URL with a #page-x suffix), but whenever I try to find elements (the table cells, td) on the next page, I get a StaleElementReferenceException.

I have tried closing the driver between steps and fetching only one list of elements at a time. I have also tried loading each page separately via URL + suffix, but the pages do not load correctly that way. I tried building separate lists (at first I wanted one big list with all the results).

from selenium import webdriver
url = "https://tickets.justrun.ca/quidchrono.php?a=qcResult&raceid=8444"

#The block below works well and I get a list of cells as intended.
driver = webdriver.Chrome()
driver.maximize_window()
driver.get(url)
elements = driver.find_elements_by_tag_name("td")
course = []
for i in range(len(elements)):
    course.append(elements[i].text)

to_2 = driver.find_element_by_link_text("2")
to_2.click()
print(driver.current_url)

#I'm trying similar code for the next chunk, but it doesn't work.
elements2 = driver.find_elements_by_tag_name("td")
print(len(elements2))
print(elements2[5].text)
course2 = []
for i in range(len(elements2)):
    course2.append(elements2[i].text)
driver.close()

I expect a new list (course2) with the results of the second page, but I get the stale element error. When I print the current URL, the result is as expected. When I print len(elements2), it is fine too. It looks like the problem occurs when I try to get the text of the elements.
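A likely cause: clicking "2" triggers an asynchronous re-render of the table, so the find_elements call races the refresh and returns references into the old DOM, which go stale by the time .text is read. Beyond waiting for the new table to load, the robust pattern is to re-locate the cells after every navigation and copy their text immediately, so no WebElement reference outlives its page. A minimal sketch of that pattern (using the question's find_elements_by_tag_name API; the helper name is illustrative):

```python
def collect_page_cells(driver):
    """Re-find every <td> on the *current* page and copy its text
    right away; the returned strings stay valid after navigation,
    unlike the WebElement references themselves."""
    return [cell.text for cell in driver.find_elements_by_tag_name("td")]

# Intended use with the question's driver (sketch, not run here):
# course  = collect_page_cells(driver)              # page 1
# driver.find_element_by_link_text("2").click()
# course2 = collect_page_cells(driver)              # fresh lookup on page 2
```

Because the helper does a fresh lookup on each call, it never reuses an element found before the page changed.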

1 Answer:

Answer 0 (score: 1)

Solution 1:

Use BeautifulSoup together with selenium. WebDriverWait waits for a specific condition to occur before proceeding further in the code. See the BeautifulSoup documentation for more details.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup

url = "https://tickets.justrun.ca/quidchrono.php?a=qcResult&raceid=8444"
driver = webdriver.Chrome()

driver.get(url)

data = []
while True:
    course = []
    WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.CLASS_NAME, "tableJustrun")))

    page_soup = BeautifulSoup(driver.page_source, 'lxml')
    # get table data 
    tbody = page_soup.find("tbody",{"id":"searchResultBoxParticipants"})
    rows = tbody.find_all("tr")

    for row in rows:
        rowData = []
        for td in row.find_all("td"):
            rowData.append(td.text)

        course.append(rowData)
    data.append(course)

    try:
        pagination = driver.find_element_by_class_name("simple-pagination")
        next_page = pagination.find_element_by_link_text("Suivant")
        # iterate next page
        next_page.click()
    except Exception as e:
        break

print(data)

Solution 2:

Use the pandas library: pandas.read_html parses every table in the page source into a list of DataFrames.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import pandas as pd

url = "https://tickets.justrun.ca/quidchrono.php?a=qcResult&raceid=8444"
driver = webdriver.Chrome()
driver.get(url)

data = []
while True:
    WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.CLASS_NAME, "tableJustrun")))
    tables = pd.read_html(driver.page_source)
    #append Participants table data
    data.append(tables[0])

    try:
        pagination = driver.find_element_by_class_name("simple-pagination")
        next_page = pagination.find_element_by_link_text("Suivant")
        # iterate next page
        next_page.click()
    except Exception as e:
        break

#Concatenate the per-page DataFrames into one
result = pd.concat(data)
print(result)
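One note on the final step: each per-page DataFrame carries its own 0-based index, so the plain pd.concat(data) produces duplicate index labels (0, 1, ... repeated per page). Passing ignore_index=True renumbers the rows continuously. A small sketch with toy frames (the column name is illustrative, not taken from the site):

```python
import pandas as pd

# Two per-page frames, each with its own 0-based index.
page1 = pd.DataFrame({"Rank": [1, 2]})
page2 = pd.DataFrame({"Rank": [3, 4]})

# Plain concat keeps the duplicate labels: 0, 1, 0, 1.
result = pd.concat([page1, page2])

# ignore_index=True renumbers rows 0..n-1 across all pages.
flat = pd.concat([page1, page2], ignore_index=True)
```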