How to parse tables from a link using Python

Time: 2018-11-05 08:14:25

Tags: python selenium beautifulsoup python-requests

I am trying to parse the tables behind a search-results link but cannot get them. Here is what I tried:

from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
import os

chrome_options = Options()
chrome_options.add_argument("--window-size=1200x1900")
chrome_driver = os.getcwd() + "/chromedriver"
driver = webdriver.Chrome(chrome_options=chrome_options, executable_path=chrome_driver)

url = "http://www.stats.gov.cn/was5/web/search?channelid=288041&andsen=流通领域重要生产资料市场价格变动情况"
driver.get(url)
soup = BeautifulSoup(driver.page_source, 'html.parser')
driver.close()

# Collect the href attribute of every search-result title
for href in soup.find_all(class_='searchresulttitle'):
    link = href.attrs['href']
    print(link)

With this approach I am only able to get the links. How can I follow each link with Python, extract its table, and store the data in an Excel file?

3 Answers:

Answer 0 (score: 0)

You should wait until those links have been generated:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver.get(url)
links = WebDriverWait(driver, 10).until(EC.presence_of_all_elements_located((By.CLASS_NAME, 'searchresulttitle')))
refs = [link.get_attribute('href') for link in links]
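
From there, to pull each table and write it to Excel, a minimal sketch (assuming each detail page contains at least one HTML table that pandas can parse, and that pandas plus openpyxl are installed; the file and sheet names are illustrative):

import pandas as pd
import requests

headers = {'User-Agent': 'Mozilla/5.0'}  # assumption: a UA header is enough to avoid blocking

with pd.ExcelWriter('tables.xlsx') as writer:
    for i, ref in enumerate(refs):
        page = requests.get(ref, headers=headers)
        page.encoding = 'utf-8'               # assumption: the detail pages are UTF-8
        try:
            tables = pd.read_html(page.text)  # every <table> on the page as DataFrames
        except ValueError:                    # read_html raises when no table is found
            continue
        tables[0].to_excel(writer, sheet_name='page_{}'.format(i), index=False)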

Answer 1 (score: 0)

You were almost there. Since the web app is JavaScript-enabled, you need to induce a WebDriverWait for the elements to become present in the HTML DOM, and then you can use BeautifulSoup to parse and print the href attributes as follows:

  • Code block:

    # -*- coding: UTF-8 -*-
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC
    from bs4 import BeautifulSoup
    
    my_url = 'http://www.stats.gov.cn/was5/web/search?channelid=288041&andsen=流通领域重要生产资料市场价格变动情况'
    options = Options()
    options.add_argument("disable-infobars")
    options.add_argument("--disable-extensions")
    driver = webdriver.Chrome(chrome_options=options, executable_path="C:\\Utility\\BrowserDrivers\\chromedriver.exe")
    driver.get(my_url)
    WebDriverWait(driver, 10).until(EC.presence_of_all_elements_located((By.XPATH, "//a[@class='searchresulttitle']")))
    soup = BeautifulSoup(driver.page_source, 'html.parser')
    driver.quit()
    for href in soup.find_all("a",{"class":"searchresulttitle"}):
        print(href.attrs['href'])
    
  • Console output:

    http://www.stats.gov.cn/tjsj/zxfb/201811/t20181105_1631364.html
    http://www.stats.gov.cn/tjsj/zxfb/201811/t20181105_1631364.html
    http://www.stats.gov.cn/tjsj/zxfb/201810/t20181024_1629464.html
    http://www.stats.gov.cn/tjsj/zxfb/201810/t20181024_1629464.html
    http://www.stats.gov.cn/tjsj/zxfb/201810/t20181015_1627579.html
    http://www.stats.gov.cn/tjsj/zxfb/201810/t20181015_1627579.html
    http://www.stats.gov.cn/tjsj/zxfb/201810/t20181009_1626612.html
    http://www.stats.gov.cn/tjsj/zxfb/201810/t20181009_1626612.html
    http://www.stats.gov.cn/tjsj/zxfb/201809/t20180925_1624525.html
    http://www.stats.gov.cn/tjsj/zxfb/201809/t20180925_1624525.html
    http://www.stats.gov.cn/tjsj/zxfb/201809/t20180914_1622865.html
    http://www.stats.gov.cn/tjsj/zxfb/201809/t20180914_1622865.html
    http://www.stats.gov.cn/tjsj/zxfb/201809/t20180904_1620652.html
    http://www.stats.gov.cn/tjsj/zxfb/201809/t20180904_1620652.html
    http://www.stats.gov.cn/tjsj/zxfb/201808/t20180824_1618797.html
    http://www.stats.gov.cn/tjsj/zxfb/201808/t20180824_1618797.html
    http://www.stats.gov.cn/tjsj/zxfb/201808/t20180814_1615716.html
    http://www.stats.gov.cn/tjsj/zxfb/201808/t20180814_1615716.html
    http://www.stats.gov.cn/tjsj/zxfb/201808/t20180806_1614209.html
    http://www.stats.gov.cn/tjsj/zxfb/201808/t20180806_1614209.html
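
Note that each URL in the output above appears twice, apparently because the result markup contains two matching anchors per entry. An order-preserving de-duplication is a one-line fix (a sketch; dict.fromkeys keeps first occurrences, with order guaranteed on Python 3.7+):

hrefs = [a.attrs['href'] for a in soup.find_all("a", {"class": "searchresulttitle"})]
unique_hrefs = list(dict.fromkeys(hrefs))  # keeps the first occurrence of each link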
    

Answer 2 (score: 0)

I did this without Selenium, which seemed easier to me. The catch is that JavaScript runs on the page, but the script's data ends up printed oddly inside the HTML, so I pulled it out with a regex.

from bs4 import BeautifulSoup
import requests
import re
import time

urls = []

headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}

url = 'http://www.stats.gov.cn/was5/web/search?channelid=288041&andsen=%E6%B5%81%E9%80%9A%E9%A2%86%E5%9F%9F%E9%87%8D%E8%A6%81%E7%94%9F%E4%BA%A7%E8%B5%84%E6%96%99%E5%B8%82%E5%9C%BA%E4%BB%B7%E6%A0%BC%E5%8F%98%E5%8A%A8%E6%83%85%E5%86%B5'

page = requests.get(url, headers=headers)
soup = BeautifulSoup(page.text, 'lxml')

centerColumn = soup.findAll('span', class_='cont_tit')

# The article URLs sit in the script text, so pull them out with a regex
for eachSpan in centerColumn:
    match = re.findall(r'http://www\.stats\.gov\.cn/.+?\.html', str(eachSpan))
    if match and match[0] not in urls:
        urls.append(match[0])

for each in urls:
    # To scrape the table on each page (assuming they all share the same layout),
    # comment out the print statement and uncomment the rest.
    print(each)
    #page = requests.get(each, headers=headers)
    #soup = BeautifulSoup(page.text, 'lxml')
    #middleTable = soup.find('table', class_='MsoNormalTable')
    #rows = middleTable.findAll('tr')
    #for eachRow in rows:
    #    print(eachRow.text)
    #time.sleep(1)
Output:
http://www.stats.gov.cn/tjsj/zxfb/201811/t20181105_1631364.html
http://www.stats.gov.cn/tjsj/zxfb/201810/t20181024_1629464.html
http://www.stats.gov.cn/tjsj/zxfb/201810/t20181015_1627579.html
http://www.stats.gov.cn/tjsj/zxfb/201810/t20181009_1626612.html
http://www.stats.gov.cn/tjsj/zxfb/201809/t20180925_1624525.html
http://www.stats.gov.cn/tjsj/zxfb/201809/t20180914_1622865.html
http://www.stats.gov.cn/tjsj/zxfb/201809/t20180904_1620652.html
http://www.stats.gov.cn/tjsj/zxfb/201808/t20180824_1618797.html
http://www.stats.gov.cn/tjsj/zxfb/201808/t20180814_1615716.html
http://www.stats.gov.cn/tjsj/zxfb/201808/t20180806_1614209.html
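
To finish the asker's stated goal of storing the tables in an Excel file, here is a sketch that extends the commented-out block above (assuming each article page carries its data in the first MsoNormalTable, and that openpyxl is installed; the output filename is illustrative):

import openpyxl

wb = openpyxl.Workbook()
wb.remove(wb.active)  # drop the default empty sheet

for each in urls:
    page = requests.get(each, headers=headers)
    page.encoding = 'utf-8'  # assumption: the article pages are UTF-8
    soup = BeautifulSoup(page.text, 'lxml')
    middleTable = soup.find('table', class_='MsoNormalTable')
    if middleTable is None:  # skip pages with a different layout
        continue
    ws = wb.create_sheet(title=each.rsplit('/', 1)[-1][:31])  # Excel caps sheet names at 31 chars
    for eachRow in middleTable.findAll('tr'):
        ws.append([cell.get_text(strip=True) for cell in eachRow.findAll(['td', 'th'])])
    time.sleep(1)  # be polite to the server

wb.save('tables.xlsx')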