Cannot extract HTML table

Time: 2019-04-07 14:57:33

Tags: python html web-scraping beautifulsoup

I want to collect information from the table on the given website using Beautiful Soup and Python 3.

I have also tried an XPath approach, but I still cannot get the data.

from urllib.request import urlopen
from bs4 import BeautifulSoup

coaches = 'https://www.badmintonengland.co.uk/coach/find-a-coach'
coachespage = urlopen(coaches)
soup = BeautifulSoup(coachespage, features="html.parser")
data = soup.find_all("tbody", {"id": "JGrid-az-com-1031-tbody"})

def crawler(table):
    # Collect every cell; the original returned inside the loop,
    # which exits after the first <td>.
    cells = []
    for mytable in table:
        rows = mytable.find_all('tr')
        for tr in rows:
            cols = tr.find_all('td')
            for td in cols:
                cells.append(td.text)
    if not cells:
        raise ValueError("no data")
    return cells


print(crawler(data))
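The fixed crawler can be checked against a small inline table; this is a minimal sketch with made-up HTML standing in for the live page's markup:

```python
from bs4 import BeautifulSoup

# Hypothetical inline table reusing the tbody id from the question
html = """
<table>
  <tbody id="JGrid-az-com-1031-tbody">
    <tr><td>Jane Doe</td><td>Leeds</td></tr>
    <tr><td>John Roe</td><td>Bristol</td></tr>
  </tbody>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
data = soup.find_all("tbody", {"id": "JGrid-az-com-1031-tbody"})

# Flatten every <td> into one list of cell texts
cells = [td.text for tbody in data for td in tbody.find_all("td")]
print(cells)  # ['Jane Doe', 'Leeds', 'John Roe', 'Bristol']
```

Against the live URL the same lookup comes back empty, because the table body is populated by JavaScript after the page loads, which is why the static-HTML approach fails.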

1 answer:

Answer 0 (score: 1)

If you use selenium to perform the selections and then pass the page_source to pd.read_html to get the table, this allows the JavaScript to run and populate the values.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import pandas as pd
import time

url = 'https://www.badmintonengland.co.uk/coach/find-a-coach'
driver = webdriver.Chrome()
driver.get(url)
ele = driver.find_element(By.CSS_SELECTOR, '.az-triggers-panel a')  # distance dropdown
driver.execute_script("arguments[0].scrollIntoView();", ele)
ele.click()
option = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "comboOption-az-com-1015-8")))  # any distance
option.click()
driver.find_element(By.CSS_SELECTOR, '.az-btn-text').click()

time.sleep(5) #seek better wait condition for page update
tables  = pd.read_html(driver.page_source)
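Note that pd.read_html returns a list of DataFrames, one per &lt;table&gt; found in the source, so the coaches table still has to be picked out of `tables` (e.g. `tables[0]`, or by inspecting each frame's columns). A minimal sketch of that pattern on made-up HTML, independent of selenium:

```python
from io import StringIO
import pandas as pd

# Made-up page source with one table, standing in for driver.page_source
page_source = """
<html><body>
<table>
  <tr><th>Name</th><th>Town</th></tr>
  <tr><td>Jane Doe</td><td>Leeds</td></tr>
</table>
</body></html>
"""

tables = pd.read_html(StringIO(page_source))  # list of DataFrames, one per <table>
df = tables[0]                                # the <th> row becomes the header
print(df.columns.tolist())  # ['Name', 'Town']
```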