Extracting a table from a web page

Date: 2018-12-19 16:25:47

Tags: python pandas web-scraping beautifulsoup web-crawler

I need to extract data from the url behind the following <a href="#">Data</a> link. Any clues on how to extract that table into a DataFrame?

3 Answers:

Answer 0: (score: 2)

It may be easier to start with a multidimensional list and then port it to a DataFrame, so that we don't have to assume sizes up front. The "Data" hyperlink references div id = 0, so we select all elements inside it, then parse each column of each row into a list (which I call elements below); that list is appended to the full list (which I call elementsfull) and reset for each new row.

from bs4 import BeautifulSoup
import pandas as pd
import requests

url = 'https://docs.google.com/spreadsheets/d/1dgOdlUEq6_V55OHZCxz5BG_0uoghJTeA6f83br5peNs/pub?range=A1:D70&gid=1&output=html#'

r = requests.get(url)
html_doc = r.text
soup = BeautifulSoup(html_doc, features='html.parser')

#print(soup.prettify())
print(soup.title.text)
# the "Data" hyperlink points at the div with id="0"
datadiv = soup.find("div", {"id": "0"})
elementsfull = []
for tr in datadiv.find_all("tr"):
    elements = []
    for td in tr.find_all("td"):
        if td.text != '':  # skip empty cells
            elements.append(td.text)
    elementsfull.append(elements)

mydf = pd.DataFrame(data=elementsfull)
print(mydf)

I tested this code and checked the output against the table, so I can vouch that it works.
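The DataFrame built this way ends up with default integer column names. As a minimal follow-up sketch (assuming the first scraped row holds the header, which depends on this particular sheet's layout), the first row can be promoted to the header; the row values below are hypothetical stand-ins for the scraped `elementsfull`:

```python
import pandas as pd

# Hypothetical stand-in for the scraped `elementsfull` list; the real
# content depends on the spreadsheet.
elementsfull = [['Name', 'Score'], ['Alice', '10'], ['Bob', '7']]

# Promote the first row to the column header and keep the rest as data.
mydf = pd.DataFrame(elementsfull[1:], columns=elementsfull[0])
print(mydf)
```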

Answer 1: (score: 0)

import bs4 as bs
import requests
import pandas as pd

url = 'https://docs.google.com/spreadsheets/d/1dgOdlUEq6_V55OHZCxz5BG_0uoghJTeA6f83br5peNs/pub?range=A1:D70&gid=1&output=html#'

r = requests.get(url)
html_doc = r.text
soup = bs.BeautifulSoup(html_doc, features='html.parser')

table = soup.find('table', attrs={'class': 'subs noBorders evenRows'})
# restrict the row search to the matched table; fall back to the whole
# document if no table with that class is found
table_rows = table.find_all('tr') if table else soup.find_all('tr')

list1 = []
for tr in table_rows:
    cells = tr.find_all('td')
    row = [cell.text for cell in cells]
    list1.append(row)

df = pd.DataFrame(list1)
df.columns = df.iloc[1]
# from this point on, it's just how you want to clean and slice the data
df = df.iloc[3:263]  # check the data to see if you only want these rows
df.dropna(axis='columns', how='all', inplace=True)
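The slicing above is specific to that sheet. The same cleanup steps can be sketched on a small hypothetical row list, where an empty leading row and trailing blank cells stand in for the artifacts the scrape produces:

```python
import pandas as pd

# Hypothetical scraped rows: an empty row, a header row, then data rows
# padded with None where cells were blank.
list1 = [[], ['Name', 'Score'], ['Alice', '10', None], ['Bob', '7', None]]

df = pd.DataFrame(list1)
df.columns = df.iloc[1]                      # the second row holds the header
df = df.iloc[2:]                             # keep only the data rows
df = df.dropna(axis='columns', how='all')    # drop all-empty columns
print(df)
```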

Answer 2: (score: 0)

You can use read_html and then process the DataFrames as needed:

import pandas as pd
results = pd.read_html('https://docs.google.com/spreadsheets/d/1dgOdlUEq6_V55OHZCxz5BG_0uoghJTeA6f83br5peNs/pub?range=A1:D70&gid=1&output=html#')
result = results[0].dropna(how='all')
del result[0]
result.dropna(axis='columns', how='all', inplace=True)
result.to_csv(r'C:\Users\User\Desktop\Data.csv', sep=',', encoding='utf_8_sig',index = False, header=None)