My code kept running into problems. When I finally got it to open the page, click the button, and download the Excel file, I hit an error: no such element.
I have a list of URLs, and Selenium works through the list until it reaches a URL with a different layout where the "element" can't be found. I just want Python to skip that URL and move on to the next one; later I can go back to the broken URLs by hand.
Here is my code:
import csv
import datetime
import time
from selenium import webdriver

urlList = []
with open('C:/Users/ovtch/PyScript/source.csv', 'r') as csvfile:
    reader = csv.DictReader(csvfile)
    for row in reader:
        urlList.append(row['URL'])

driver = webdriver.Chrome(executable_path='chromedriver.exe')

def downloadCSV():
    count = 1
    for url in urlList:
        driver.get(url)
        # Page scraping prep
        time.sleep(1)
        driver.find_element_by_xpath('my path').click()
        time.sleep(1)
        driver.find_element_by_xpath('my path').click()
        count += 1

"""Show starting time"""
print("Start project time")
print(datetime.datetime.now().time())
downloadCSV()
"""Show end time"""
print("End project time")
print("Success")
time.sleep(8)
#driver.quit()
Answer 0 (score: 0)
You can wrap the body of the loop in try/except, ignore the error, and continue with the next URL:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

# your code....
for url in urlList:
    try:
        driver.get(url)
        # Wait until the button is clickable, then click
        WebDriverWait(driver, 10).until(
            EC.element_to_be_clickable((By.XPATH, "my path"))
        ).click()
    except TimeoutException:
        continue  # element never appeared; skip this URL
Answer 1 (score: 0)
Try this: wrap your find_element_by_* calls in a try/except block and handle the exception accordingly:
from selenium.common.exceptions import NoSuchElementException

def downloadCSV():
    count = 1
    for url in urlList:
        try:
            driver.get(url)
            # Page scraping prep
            time.sleep(1)
            driver.find_element_by_xpath('my path').click()
            time.sleep(1)
            driver.find_element_by_xpath('my path').click()
            count += 1
        except NoSuchElementException:
            pass  # or handle/log the error
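Since the asker wants to return to the broken URLs by hand later, it may be worth collecting them rather than silently passing. Below is a minimal, browser-free sketch of that pattern: `download_all` and `fake_process` are hypothetical names, and `LookupError` stands in for Selenium's `NoSuchElementException` so the example runs anywhere.

```python
def download_all(urls, process):
    """Run `process` on each URL; collect the ones that fail instead of aborting."""
    broken = []
    for url in urls:
        try:
            process(url)
        except LookupError:      # in real code: except NoSuchElementException
            broken.append(url)   # remember it for a later manual pass
    return broken

def fake_process(url):
    # Simulated scrape: pages containing "bad" have a different layout.
    if "bad" in url:
        raise LookupError("no such element")

urls = ["https://a.example", "https://bad.example", "https://c.example"]
print(download_all(urls, fake_process))  # -> ['https://bad.example']
```

In the real script, the body of `process` would be the `driver.get(...)` and click calls, and the returned list could be written to a CSV for the manual retry.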