I am trying to access the data on this site: http://surge.srcc.lsu.edu/s1.html. So far my code loops through the two dropdown menus, but the table is named dynamically and I cannot get the data out of it. I have tried to access the data through the "output_data_table" class but am running into trouble.
# importing libraries
from selenium import webdriver
import time
from selenium.webdriver.support.ui import Select
import lxml.html

driver = webdriver.Firefox()
driver.get("http://surge.srcc.lsu.edu/s1.html")

# definition for switching frames
def frame_switch(css_selector):
    driver.switch_to.frame(driver.find_element_by_css_selector(css_selector))

frame_switch("iframe")
html_source = driver.page_source

nameSelect = Select(driver.find_element_by_xpath('//select[@id="storm_name"]'))
stormCount = len(nameSelect.options)
for i in range(1, stormCount):
    print("starting loop on option storm " + nameSelect.options[i].text)
    nameSelect.select_by_index(i)
    time.sleep(3)
    yearSelect = Select(driver.find_element_by_xpath('//select[@id="year"]'))
    yearCount = len(yearSelect.options)
    for j in range(1, yearCount):
        print("starting loop on option year " + yearSelect.options[j].text)
        yearSelect.select_by_index(j)
        root = lxml.html.fromstring(driver.page_source)
        # table = driver.find_element_by_id("output_data_table")
        for row in root.xpath('.//table[@id="output_data_table"]//tr'):
            # needs dynamic table name
            cells = row.xpath('.//td/text()')
            dict_value = {'0th': cells[0],
                          '1st': cells[1],
                          '2nd': cells[2],
                          '3rd': cells[3],
                          '4th': cells[5],
                          '5th': cells[6],
                          '6th': cells[7],
                          '7th': cells[8]}
            print(dict_value)
Answer 0 (score 0):
It looks like you have to wait before calling "root = lxml.html.fromstring(driver.page_source)".
If you don't wait, you grab the HTML source before the JavaScript has generated the table. Put a "time.sleep(10)" before it.
This seems to get the table. I used BeautifulSoup as a simple example.
from selenium import webdriver
import time, re
from selenium.webdriver.support.ui import Select
import lxml.html
from bs4 import BeautifulSoup

driver = webdriver.Firefox()
driver.get("http://surge.srcc.lsu.edu/s1.html")

# definition for switching frames
def frame_switch(css_selector):
    driver.switch_to.frame(driver.find_element_by_css_selector(css_selector))

frame_switch("iframe")
html_source = driver.page_source

nameSelect = Select(driver.find_element_by_xpath('//select[@id="storm_name"]'))
stormCount = len(nameSelect.options)
for i in range(1, stormCount):
    print("starting loop on option storm " + nameSelect.options[i].text)
    nameSelect.select_by_index(i)
    time.sleep(3)
    yearSelect = Select(driver.find_element_by_xpath('//select[@id="year"]'))
    yearCount = len(yearSelect.options)
    for j in range(1, yearCount):
        print("starting loop on option year " + yearSelect.options[j].text)
        yearSelect.select_by_index(j)
        # wait for the JavaScript-generated table to appear
        time.sleep(10)
        soup = BeautifulSoup(driver.page_source, 'html.parser')
        # get the needed table body
        print(soup.find_all("tbody", {"class": re.compile(".*")})[1].prettify())
        # print out the text of each row
        get_table = soup.find_all("tbody", {"class": re.compile(".*")})[1]
        rows = get_table.find_all("tr")
        for row in rows:
            print(row.getText())
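If the fixed time.sleep(10) feels brittle, an explicit wait can be used instead, so the script proceeds as soon as the table rows exist. Below is a minimal sketch using Selenium's WebDriverWait, assuming the generated table keeps "output_data_table" somewhere in its id (an assumption; adjust the locator to whatever the page actually generates):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
driver.get("http://surge.srcc.lsu.edu/s1.html")
driver.switch_to.frame(driver.find_element_by_css_selector("iframe"))

# Wait up to 20 seconds for at least one row of the generated table to appear,
# instead of sleeping for a fixed amount of time.
wait = WebDriverWait(driver, 20)
rows = wait.until(EC.presence_of_all_elements_located(
    (By.XPATH, '//table[contains(@id, "output_data_table")]//tr')))
for row in rows:
    print(row.text)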