I'm trying to scrape the stock listings from the IB website, but I'm having trouble extracting the table information from the HTML.
If I use:
import requests
from bs4 import BeautifulSoup

website_url = requests.get('https://www.interactivebrokers.com/en/index.phpf=2222&exch=mexi&showcategories=STK#productbuffer').text
soup = BeautifulSoup(website_url, 'lxml')
My_table = soup.find('div', {'class': 'table-responsive no-margin'})
print(My_table)
it captures the HTML data, but when I try to use it with the code below it fails, so as a workaround I captured the HTML table data and parsed it manually.
I have the following code:
import pandas as pd
from bs4 import BeautifulSoup
html_string = """
<div class="table-responsive no-margin">
<table width="100%" cellpadding="0" cellspacing="0" border="0"
class="table table-striped table-bordered">
<thead>
<tr>
<th width="15%" align="left" valign="middle"
class="table_subheader">IB Symbol</th>
<th width="55%" align="left" valign="middle" class="table_subheader">Product Description
<span class="text-small">(click link for more details)</span></th>
<th width="15%" align="left" valign="middle" class="table_subheader">Symbol</th>
<th width="15%" align="left" valign="middle" class="table_subheader">Currency</th>
</tr>
</thead>
<tbody>
<tr>
<td>0JN9N</td>
<td><a href="javascript:NewWindow('https://misc.interactivebrokers.com/cstools/contract_info/index2.php?action=Details&site=GEN&conid=189723078','Details','600','600','custom','front');" class="linkexternal">DSV AS</a></td>
<td>0JN9N</td>
<td>MXN</td>
</tr>
<tr>
<td>0QBON</td>
<td><a href="javascript:NewWindow('https://misc.interactivebrokers.com/cstools/contract_info/index2.php?action=Details&site=GEN&conid=189723075','Details','600','600','custom','front');" class="linkexternal">COLOPLAST-B</a></td>
<td>0QBON</td>
<td>MXN</td>
</tr>
<tr>
<td>0R87N</td>
<td><a href="javascript:NewWindow('https://misc.interactivebrokers.com/cstools/contract_info/index2.php?action=Details&site=GEN&conid=195567802','Details','600','600','custom','front');" class="linkexternal">ASSA ABLOY AB-B</a></td>
<td>0R87N</td>
<td>MXN</td>
</tr>
</tbody>
</table>"""
soup = BeautifulSoup(html_string, 'lxml')  # Parse the HTML as a string
table = soup.find_all('table')[0]  # Grab the first table
new_table = pd.DataFrame(columns=range(0, 4), index=[0])  # I know the size

row_marker = 0
for row in table.find_all('tr'):
    column_marker = 0
    columns = row.find_all('td')
    for column in columns:
        new_table.iat[row_marker, column_marker] = column.get_text()
        column_marker += 1
print(new_table)
But it only displays the last row:
If I remove that last part and add the following instead:
soup = BeautifulSoup(html, 'lxml')
table = soup.find("div")

# The first tr contains the field names.
headings = [th.get_text().strip() for th in table.find("tr").find_all("th")]
print(headings)

datasets = []
for row in table.find_all("tr")[1:]:
    df = pd.DataFrame(headings, (td.get_text() for td in row.find_all("td")))
    datasets.append(df)
    print(datasets)

df.to_csv('Path_to_file\\test1.csv')
it picks up the rest of the rows, but the format is completely wrong, and the csv contains only the last item of the list.
How can I extract the HTML table details directly from the website and print them to a csv in the format shown in the first image?
Answer 0: (score: 0)
You can remove `row_marker = 0` and use `enumerate` instead:
for row_marker, row in enumerate(table.find_all('tr')):
    columns = row.find_all('td')
    try:
        new_table.loc[row_marker] = [column.get_text() for column in columns]
    except ValueError:
        # Safely skip rows whose cell count doesn't match the DataFrame,
        # e.g. the header row, where the list of <td> texts is empty.
        continue
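For completeness, here is a minimal sketch of the whole approach: gather each row as a list of cell texts, then build the DataFrame once with the headings as column names. The `html_string` below is a trimmed-down stand-in for the table in the question, and `'html.parser'` is used instead of `'lxml'` only to avoid the extra dependency.

```python
import pandas as pd
from bs4 import BeautifulSoup

# Trimmed-down stand-in for the table HTML from the question.
html_string = """
<table>
<tr><th>IB Symbol</th><th>Product Description</th><th>Symbol</th><th>Currency</th></tr>
<tr><td>0JN9N</td><td>DSV AS</td><td>0JN9N</td><td>MXN</td></tr>
<tr><td>0QBON</td><td>COLOPLAST-B</td><td>0QBON</td><td>MXN</td></tr>
</table>
"""

soup = BeautifulSoup(html_string, 'html.parser')
table = soup.find('table')

# The first tr holds the <th> headings; the rest hold <td> data cells.
headings = [th.get_text(strip=True) for th in table.find('tr').find_all('th')]
rows = [[td.get_text(strip=True) for td in tr.find_all('td')]
        for tr in table.find_all('tr')[1:]]

df = pd.DataFrame(rows, columns=headings)
df.to_csv('test1.csv', index=False)
print(df)
```

As a shortcut, `pd.read_html(html_string)` can often parse such a table into a DataFrame in one call, provided lxml or html5lib is installed.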