Scraping the web with BeautifulSoup and pandas to get a table. One of the columns has some URLs. When I pass the html to pandas, the href is lost.
Is there a way to keep the URL link just for that column?
Sample data (edited for a better test case):
<html>
<body>
<table>
<tr>
<td>customer</td>
<td>country</td>
<td>area</td>
<td>website link</td>
</tr>
<tr>
<td>IBM</td>
<td>USA</td>
<td>EMEA</td>
<td><a href="http://www.ibm.com">IBM site</a></td>
</tr>
<tr>
<td>CISCO</td>
<td>USA</td>
<td>EMEA</td>
<td><a href="http://www.cisco.com">cisco site</a></td>
</tr>
<tr>
<td>unknown company</td>
<td>USA</td>
<td>EMEA</td>
<td></td>
</tr>
</table>
</body>
</html>
My Python code:
import pandas as pd
from bs4 import BeautifulSoup

file = open(url, "r")
soup = BeautifulSoup(file, 'lxml')
parsed_table = soup.find_all('table')[1]
df = pd.read_html(str(parsed_table), encoding='utf-8')[0]
df
Output (exported as CSV):
customer;country;area;website
IBM;USA;EMEA;IBM site
CISCO;USA;EMEA;cisco site
unknown company;USA;EMEA;
The df output is fine, but the links are lost. I need to keep the links, or at least the URLs.
Any hints?
Answer 0 (score: 9)
pd.read_html assumes the data you are interested in is in the text, not in the tag attributes. However, it is not hard to scrape the table yourself:
import bs4 as bs
import pandas as pd
with open(url, "r") as f:
    soup = bs.BeautifulSoup(f, 'lxml')

parsed_table = soup.find_all('table')[1]
data = [[td.a['href'] if td.find('a') else
         ''.join(td.stripped_strings)
         for td in row.find_all('td')]
        for row in parsed_table.find_all('tr')]
df = pd.DataFrame(data[1:], columns=data[0])
print(df)
which yields
customer country area website link
0 IBM USA EMEA http://www.ibm.com
1 CISCO USA EMEA http://www.cisco.com
2 unknown company USA EMEA
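For completeness: newer pandas versions (1.5+) can keep the hrefs without any manual scraping, via the extract_links argument of read_html. A minimal sketch, assuming pandas >= 1.5 with an html parser such as lxml installed; with extract_links="body", each body cell becomes a (text, href) tuple, with href set to None where there is no link:

```python
import io
import pandas as pd

html = """<table>
<tr><th>customer</th><th>website link</th></tr>
<tr><td>IBM</td><td><a href="http://www.ibm.com">IBM site</a></td></tr>
</table>"""

# pandas >= 1.5: extract_links="body" turns each body cell into a
# (text, href) pair, so the URL survives the parse
df = pd.read_html(io.StringIO(html), extract_links="body")[0]
print(df)
```

The string is wrapped in io.StringIO because recent pandas versions deprecate passing literal html directly to read_html.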
Answer 1 (score: 4)
Just check whether the tag exists:
import numpy as np
import bs4 as bs
import pandas as pd

with open(url, "r") as f:
    sp = bs.BeautifulSoup(f, 'lxml')

tb = sp.find_all('table')[56]
df = pd.read_html(str(tb), encoding='utf-8', header=0)[0]
# note: find_all('a') only returns tags that exist, so this assumes
# every row of the table contains exactly one <a> tag
df['href'] = [np.where(tag.has_attr('href'), tag.get('href'), "no link")
              for tag in tb.find_all('a')]
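The list comprehension above assumes every row contains an <a> tag; when some rows have none (like the "unknown company" row in the question), the collected hrefs shift out of alignment with the DataFrame rows. A row-by-row variant stays aligned — a sketch using the question's sample data:

```python
import io
import bs4 as bs
import pandas as pd

html = """<table>
<tr><td>customer</td><td>website link</td></tr>
<tr><td>IBM</td><td><a href="http://www.ibm.com">IBM site</a></td></tr>
<tr><td>unknown company</td><td></td></tr>
</table>"""

sp = bs.BeautifulSoup(html, 'lxml')
tb = sp.find('table')
df = pd.read_html(io.StringIO(str(tb)), header=0)[0]

# collect the href row by row, so rows without an <a> tag
# keep their position instead of being skipped
links = []
for tr in tb.find_all('tr')[1:]:   # skip the header row
    a = tr.find('a')
    links.append(a['href'] if a is not None else "no link")
df['href'] = links
```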
Answer 2 (score: 1)
Here is another approach if you want to get multiple links out of an html table. Instead of a hard-to-read list comprehension it uses separate for loops, so the code is easier to follow for people new to Python, and easier to adjust or extend with error handling when something goes wrong. I hope it helps someone.
from bs4 import BeautifulSoup
import pandas as pd

soup = BeautifulSoup(html, "lxml")
table = soup.find('table')

thead = table.find('thead')
column_names = [th.text.strip() for th in thead.find_all('th')]

data = []
for row in table.find_all('tr'):
    row_data = []
    for td in row.find_all('td'):
        td_check = td.find('a')
        if td_check is not None:
            link = td.a['href']
            row_data.append(link)
        else:
            not_link = ''.join(td.stripped_strings)
            if not_link == '':
                not_link = None
            row_data.append(not_link)
    data.append(row_data)

df = pd.DataFrame(data[1:], columns=column_names)
df_dict = df.to_dict('records')
for row in df_dict:
    print(row)
print(row)