Why doesn't the loop work after opening the first element via XPath? I get the following exception:
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: {"method":"xpath","selector":"//*[@id='searchresults']/tbody/tr[2]/td[1]"}
Stacktrace:
    at FirefoxDriver.prototype.findElementInternal_ (file:///c:/users/home/appdata/local/temp/tmpeglp49/extensions/fxdriver@googlecode.com/components/driver-component.js:10723)
    at FirefoxDriver.prototype.findElement (file:///c:/users/home/appdata/local/temp/tmpeglp49/extensions/fxdriver@googlecode.com/components/driver-component.js:10732)
    at DelayedCommand.prototype.executeInternal_/h (file:///c:/users/home/appdata/local/temp/tmpeglp49/extensions/fxdriver@googlecode.com/components/command-processor.js:12614)
    at DelayedCommand.prototype.executeInternal_ (file:///c:/users/home/appdata/local/temp/tmpeglp49/extensions/fxdriver@googlecode.com/components/command-processor.js:12619)
    at DelayedCommand.prototype.execute/< (file:///c:/users/home/appdata/local/temp/tmpeglp49/extensions/fxdriver@googlecode.com/components/command-processor.js:12561)
Code:
from selenium import webdriver
from texttable import len
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
driver=webdriver.Firefox()
driver.get('https://jobs.ericsson.com/search/')
driver.maximize_window()
driver.find_element_by_css_selector('[type="text"][id="keywordsearch-q"]').send_keys('Python')
driver.find_element_by_css_selector('[class="btn"][type="submit"]').click()
i=len("//*[@id='searchresults']/tbody/tr/td")
for j in range(1,i+1):
    driver.find_element_by_xpath("//*[@id='searchresults']/tbody/tr[%d]/td[1]"%j).click()
    print driver.find_element_by_id("job-title").text
    driver.back()
    continue
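One cause worth ruling out before any timing issue: in the code above, len() is applied to the XPath string itself, so i is the number of characters in that string and has nothing to do with how many table rows exist. A minimal illustration (the commented-out alternative is a sketch that reuses the question's own locators and requires a live driver):

```python
# The loop bound comes from the length of the XPath *string*, not from the
# number of matching elements -- len() here counts characters:
xpath = "//*[@id='searchresults']/tbody/tr/td"
print(len(xpath))  # 36 -- characters in the string, not rows in the table

# Sketch of the intended count (needs a running WebDriver, as in the question):
# rows = driver.find_elements_by_xpath("//*[@id='searchresults']/tbody/tr")
# for j in range(1, len(rows) + 1):
#     ...
```

So the loop happily runs j up to 36 even when the result table has far fewer rows, and the first missing row raises NoSuchElementException.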
Question 2: Why does the list length show as 12 when there are only 5 href elements?
from selenium import webdriver
from texttable import len
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
driver=webdriver.Firefox()
driver.delete_all_cookies()
driver.get('https://jobs.ericsson.com/search/')
driver.maximize_window()
driver.find_element_by_css_selector('[type="text"][id="keywordsearch-q"]').send_keys('Python')
driver.find_element_by_css_selector('[class="btn"][type="submit"]').click()
#currenturl = driver.current_url
pages=driver.find_elements_by_css_selector('a[rel="nofollow"]')
print pages
print 'Its working'
pages1=[]
for page1 in pages:
    pages1.append(page1.get_attribute('href'))
print int(len(pages1))
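A likely explanation for the 12-vs-5 mismatch: the CSS selector a[rel="nofollow"] matches every nofollow anchor on the page (previous/next arrows, a duplicated pager row, anchors without an href), not only the 5 visible page-number links. The hrefs below are made-up placeholders, but the filtering sketch shows one way to reduce the raw matches to distinct, non-empty links:

```python
# Hypothetical values as get_attribute('href') might return them; the raw
# list is longer than the visible pager because extra nofollow anchors match.
raw = [
    "https://jobs.ericsson.com/search/?q=Python&startrow=25",
    "https://jobs.ericsson.com/search/?q=Python&startrow=50",
    "https://jobs.ericsson.com/search/?q=Python&startrow=25",  # duplicated pager row
    None,                                                      # anchor without an href
    "https://jobs.ericsson.com/search/?q=Python&startrow=75",
]

# Keep only non-empty hrefs and drop duplicates while preserving order.
seen = set()
unique = [h for h in raw if h and not (h in seen or seen.add(h))]
print(len(unique))  # 3 distinct links out of 5 raw matches
```

Printing the raw element list (as the code above does) only shows WebElement objects; comparing the filtered hrefs against what is visible on the page is the quickest way to see which extra anchors the selector picked up.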
Question 3: How do I get elements that sit under an HTML tag?
a. How do I get "1 – 25" and "104" separately from under the b tags?
See the URL: https://jobs.ericsson.com/search/?q=Python (the results section is shown at the bottom of the page)
<div class="paginationShell clearfix" lang="en_US" xml:lang="en_US">
<div class="pagination well well-small">
<span class="pagination-label-row">
<span class="paginationLabel">
Results
<b>1 – 25</b>
of
<b>104</b>
</span>
b. How do I get the job ID from this HTML?
<div class="job">
<span itemprop="description">
<b>Req ID:</b>
128378
<br/>
<br/>
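For both sub-questions, one route is to locate the <b> elements directly with Selenium (e.g. driver.find_elements_by_xpath("//span[@class='paginationLabel']/b") and read .text from each); another is to grab the container's HTML and parse it. A sketch of the parsing route, using regular expressions on the snippets quoted above:

```python
import re

pagination_html = """
<span class="paginationLabel">
Results
<b>1 – 25</b>
of
<b>104</b>
</span>
"""

# a. The two <b> values, in document order.
bold = re.findall(r"<b>(.*?)</b>", pagination_html)
print(bold)  # ['1 – 25', '104']

job_html = """
<div class="job">
<span itemprop="description">
<b>Req ID:</b>
128378
<br/>
"""

# b. The job id is the first run of digits after the "Req ID:" label.
m = re.search(r"Req ID:</b>\s*(\d+)", job_html)
print(m.group(1))  # '128378'
```

Regex parsing is only reasonable for small, stable fragments like these; for anything more involved, an HTML parser or Selenium's own locators are the safer choice.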
Answer 0 (score: 0)
Please try the following:
for job in range(len(driver.find_elements_by_class_name('jobTitle-link'))):
    driver.implicitly_wait(5)
    driver.find_elements_by_class_name('jobTitle-link')[job].click()
    print driver.find_element_by_id("job-title").text
    driver.back()
Answer 1 (score: 0)
This may help you. In my own experience, I usually hit this error when the page has not fully loaded. Adding time.sleep(1) before searching for the element usually solves the problem for me (assuming the code is otherwise correct).
import time
#Skip your other code
for j in range(1,i+1):
    time.sleep(1)
    driver.find_element_by_xpath("//*[@id='searchresults']/tbody/tr[%d]/td[1]"%j).click()
    print driver.find_element_by_id("job-title").text
    driver.back()
    continue
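A fixed time.sleep(1) either waits longer than needed or not long enough. Selenium's explicit waits poll a condition instead, e.g. WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, 'job-title'))), using the By/EC/WebDriverWait imports the question already has. Stripped of the browser, the polling idea is just:

```python
import time

def wait_until(condition, timeout=10, poll=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` expires.

    This mirrors what selenium's WebDriverWait.until does, minus the browser.
    """
    end = time.time() + timeout
    while time.time() < end:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError("condition not met within %s seconds" % timeout)

# Toy condition that only becomes true on the third poll.
calls = {"n": 0}
def ready():
    calls["n"] += 1
    return calls["n"] >= 3

print(wait_until(ready, timeout=5, poll=0.01))  # True
```

The advantage over a flat sleep: the wait returns as soon as the condition holds and only fails (with a timeout) when the page genuinely never produced the element.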
Answer 2 (score: 0)
Here is a working solution. The idea is not to click each link, but to store the URLs in a list and then navigate to each one:
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
driver=webdriver.Firefox()
driver.get('https://jobs.ericsson.com/search/')
driver.maximize_window()
driver.find_element_by_css_selector('[type="text"][id="keywordsearch-q"]').send_keys('Python')
driver.find_element_by_css_selector('[class="btn"][type="submit"]').click()
#To further process preserve the current url
currenturl = driver.current_url
#Get all the elements by class name
jobs = driver.find_elements_by_class_name('jobTitle-link')
jobslink = []
#Get hyperlink urls from the jobs elements
#This way we avoid clicking each link and going back to the previous page
for job in jobs:
    jobslink.append(job.get_attribute('href'))
#Get each element page
for job in jobslink:
    driver.get(job)
    print driver.find_element_by_id("job-title").text