How to check whether a Selenium WebDriver click() has changed the page in Python

Asked: 2014-07-31 17:48:08

Tags: python selenium selenium-webdriver phantomjs

I've read that click() in Selenium's webdrivers is asynchronous, so I've been trying to make the webdriver wait until the click has taken effect before doing anything else. I'm using PhantomJS as my browser.

I use a WebDriverWait object to wait for an element on the page to change (that's how I detect whether the page has loaded/changed after clicking something). My problem is that I keep getting TimeoutExceptions from WebDriverWait.

Is there anything else I can do after clicking something to wait for the page to load? I don't want to use time.sleep(1), because the load time seems to vary and I don't want to sleep longer than necessary. That's why I want to wait explicitly for the page to load.
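(For reference, one standard explicit-wait pattern is to poll document.readyState through WebDriverWait. The sketch below is only illustrative: it detects full page navigations, not content swapped in by JavaScript, so it may not apply to an AJAX-driven table like the one below.)

from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait

# Sketch: block until the browser reports the document as fully loaded.
# Note: this does not cover AJAX updates within an already-loaded page.
driver = webdriver.PhantomJS()
driver.get('http://example.com')
WebDriverWait(driver, 10).until(
    lambda d: d.execute_script('return document.readyState') == 'complete')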

Here is my webdriver code and the corresponding wait:

import time
from bs4 import BeautifulSoup
from selenium import webdriver
import selenium.webdriver.support.ui as ui
import selenium.common.exceptions as exceptions

class Webdriver():

    def __init__(self, wait_time=10):
        self.driver = webdriver.PhantomJS()
        self.driver.set_window_size(1200,800)
        self.wait = wait_time

    def click(self, element_xpath, wait_xpath, sleep_time=0):
        # Record the text of the element at wait_xpath, click the target element,
        # then wait until that text changes (or the wait times out).
        wait = ui.WebDriverWait(self.driver, self.wait)
        old_element = self.driver.find_element_by_xpath(wait_xpath)
        old_text = old_element.text
        self.driver.find_element_by_xpath(element_xpath).click()
        wait.until(lambda driver: element_changed(driver, wait_xpath, old_text, 20))
        time.sleep(sleep_time)

def element_changed(driver, element_xpath, old_element_text, timeout_seconds=10):
    # Poll until the element's text differs from old_element_text.
    # A stale element counts as changed; a missing element is retried until the timeout.
    pause_interval = 1
    t0 = time.time()
    while time.time() - t0 < timeout_seconds:
        try:
            element = driver.find_element_by_xpath(element_xpath)
            if element.text != old_element_text:
                return True
        except exceptions.StaleElementReferenceException:
            return True
        except exceptions.NoSuchElementException:
            pass
        time.sleep(pause_interval)
    return False

Here is the code that runs it:

driver = Webdriver()
url = 'http://www.atmel.com/products/microcontrollers/avr/default.aspx?tab=parameters'
wait_xpath = '//*[@id="device-columns"]/tbody/tr[2]/td[1]/div[2]/a'
driver.load(url, wait_xpath)
soup = driver.get_soup()

pages = soup('ul', class_='pagination')[0]('a')
num_pages = len(pages)
products = set()
for i in range(num_pages):
    element_xpath = '//*[@id="top-nav"]/div/ul/li[%d]/a' % (2 + i)
    driver.click(element_xpath, wait_xpath)
    soup = driver.get_soup()
    for tag in soup('td', class_='first-cell'):
        product = tag.find('div', class_='anchor')
        if not product:
            continue
        else:
            if product.find('a'):
                products.add(product.find('a')['href'])
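(The running code also calls driver.load(url, wait_xpath) and driver.get_soup(), which are not shown in the question. A minimal sketch of what those Webdriver methods presumably look like, inferred from their call sites, is below; the bodies are assumptions, not the asker's actual code.)

    # Hypothetical reconstruction of the two Webdriver methods used above,
    # inferred from how they are called; not taken from the question.
    def load(self, url, wait_xpath):
        # Navigate to url, then wait until the element at wait_xpath is present.
        wait = ui.WebDriverWait(self.driver, self.wait)
        self.driver.get(url)
        wait.until(lambda driver: driver.find_element_by_xpath(wait_xpath))

    def get_soup(self):
        # Parse the current page source with BeautifulSoup.
        return BeautifulSoup(self.driver.page_source, 'html.parser')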

Update

Part of my problem was that I was re-loading the first page and expecting it to change. But even with that fixed, and with the click and soup lines moved inside the for-loop, the change still sometimes takes a long time.

1 Answer:

Answer 0 (score: 1)

Instead of using WebDriverWait, I made the function itself wait until the element has changed. It seems to work now, but I can't help feeling it's fragile and won't work every time.

def click(self, element_xpath, wait_xpath=None, sleep_time=0):
    # If a wait_xpath is given, remember that element's text before the click
    # and poll afterwards until it changes.
    if wait_xpath:
        old_element = self.driver.find_element_by_xpath(wait_xpath)
        old_text = old_element.text
    self.driver.find_element_by_xpath(element_xpath).click()
    if wait_xpath:
        if not element_changed(self.driver, wait_xpath, old_text):
            # log is assumed to be a module-level logger, e.g. logging.getLogger(__name__)
            log.warn('click did not change element at %s', wait_xpath)
            return False
    time.sleep(sleep_time)
    return True

def element_changed(driver, element_xpath, old_element_text, timeout_seconds=10):
    pause_interval = 1
    t0 = time.time()
    while time.time() - t0 < timeout_seconds:
        try:
            element = driver.find_element_by_xpath(element_xpath)
            if element.text != old_element_text:
                return True
        except exceptions.StaleElementReferenceException:
            return True
        except exceptions.NoSuchElementException:
            pass
        time.sleep(pause_interval)
    return False

The code that runs it is:

driver = Webdriver()
url = 'http://www.atmel.com/products/microcontrollers/avr/default.aspx?tab=parameters'
wait_xpath = '//*[@id="device-columns"]/tbody/tr[2]/td[1]/div[2]/a'
driver.load(url, wait_xpath)
soup = driver.get_soup()

pages = soup('ul', class_='pagination')[0]('a')
num_pages = len(pages)
products = set()
for i in range(num_pages):
    element_xpath = '//*[@id="top-nav"]/div/ul/li[%d]/a' % (2 + i)
    if i == 0:
        driver.click(element_xpath, None, 1)
    else:
        driver.click(element_xpath, wait_xpath, 1)
    soup = driver.get_soup()
    for tag in soup('td', class_='first-cell'):
        product = tag.find('div', class_='anchor')
        if not product:
            continue
        else:
            if product.find('a'):
                products.add(product.find('a')['href'])
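(As a follow-up note: instead of the hand-rolled polling above, Selenium ships an expected_conditions module; waiting for the old element to go stale and then for a fresh one to appear at the same XPath is a common and arguably sturdier pattern. The sketch below is an assumption about how that would fit here, not part of the original answer.)

from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

def click_and_wait(driver, element_xpath, wait_xpath, timeout=10):
    # Sketch: grab a reference to the element we expect to be replaced,
    # click, wait for that reference to go stale, then wait for the new
    # element to appear at the same XPath.
    old_element = driver.find_element_by_xpath(wait_xpath)
    driver.find_element_by_xpath(element_xpath).click()
    wait = WebDriverWait(driver, timeout)
    wait.until(EC.staleness_of(old_element))
    wait.until(lambda d: d.find_element_by_xpath(wait_xpath))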