I am trying to use bs4 to get the tables from a webpage and pandas to get them into csv. The webpage has two tables; I can get the first one, but only the headers of the second table are scraped. Below is the code I have used.
from urllib2 import Request, urlopen
from bs4 import BeautifulSoup
from scrapelib import table_to_2d
import pandas as pd

ehurl = 'https://www.fpi.nsdl.co.in/web/Reports/Latest.aspx'
hd = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:46.0) Gecko/46.0 Firefox/46.0'}

raq = Request(ehurl, headers=hd)
resp = urlopen(raq)
eh_page = resp.read()

soup = BeautifulSoup(eh_page, "html.parser")

i = 1
for qeros in soup.findAll("table"):
    x = table_to_2d(qeros)
    df = pd.DataFrame(x)
    df.to_csv("fpi" + str(i) + ".csv", sep=",", header=False, index=False)
    i += 1
The table_to_2d function comes from https://stackoverflow.com/a/48451104/2724299.
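For reference, here is a minimal sketch of what a table_to_2d-style helper could look like; this is a simplified stand-in with a hypothetical name, not the exact function from the linked answer (which also expands rowspan/colspan):

def simple_table_to_2d(table_tag):
    # collect every row of the <table> as a list of cell texts
    rows = []
    for tr in table_tag.find_all('tr'):
        cells = tr.find_all(['th', 'td'])
        rows.append([cell.get_text(strip=True) for cell in cells])
    return rows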
Answer 0 (score: 2)
I am not sure what format you want your csv files in, but you could try something like the following to get the tables into csv files:
from bs4 import BeautifulSoup
from requests import get
from csv import writer

url = 'https://www.fpi.nsdl.co.in/web/Reports/Latest.aspx'

r = get(url)
soup = BeautifulSoup(r.text, 'lxml')

# get all tables
tables = soup.find_all('table')

# loop over each table
for num, table in enumerate(tables, start=1):
    # create filename
    filename = 'table-%d.csv' % num

    # open file for writing
    with open(filename, 'w') as f:
        # create csv writer object
        csv_writer = writer(f)

        # go through each row
        rows = table.find_all('tr')
        for row in rows:
            # write headers if any
            headers = row.find_all('th')
            if headers:
                csv_writer.writerow([header.text.strip() for header in headers])

            # write column items
            columns = row.find_all('td')
            csv_writer.writerow([column.text.strip() for column in columns])
which gives the following table-1.csv:
Daily Trends in FPI Investments on 07-Aug-2018
Reporting Date,Debt/Equity/Hybrid,Investment Route,Gross Purchases(Rs. Crore),Gross Sales (Rs. Crore),Net Investment (Rs. Crore),Net Investment US($) million,Conversion (1 USD TO INR)*
07-Aug-2018,Equity,Stock Exchange,4405.92,3972.93,432.99,63.04,Rs.68.6833
Primary market & others,14.43,0.00,14.43,2.10
Sub-total,4420.35,3972.93,447.42,65.14
Debt,Stock Exchange,465.68,116.77,348.91,50.80
Primary market & others,0.00,3.08,(3.08),(0.45)
Sub-total,465.68,119.85,345.83,50.35
Hybrid,Stock Exchange,1.33,3.93,(2.60),(0.38)
Primary market & others,0.00,0.00,0.00,0.00
Sub-total,1.33,3.93,(2.60),(0.38)
Total,4887.36,4096.71,790.65,115.11
The data presented above is compiled on the basis of reports submitted to depositories by DDPs on 07-Aug-2018 and constitutes trades conducted by FPIs/FIIs on and upto the previous trading day(s).Note
and table-2.csv:
Daily Trends in FPI Derivative Trades on 07-Aug-2018
Reporting Date,Derivative Products,Buy,Sell
Open Interest at theend of the date
No. of Contracts,Amount in Crore,No. of Contracts,Amount in Crore,No. of Contracts,Amount in Crore
07-Aug-2018,Index Futures,16899.00,1560.45,17802.00,1706.72,298303.00,26117.55
Index Options,505226.00,51512.43,526331.00,53460.93,654904.00,58508.63
Stock Futures,165411.00,11454.08,158928.00,11105.55,1108615.00,82830.85
Stock Options,84583.00,6297.87,86777.00,6441.33,108437.00,8272.44
Interest Rate Futures,0.00,0.00,0.00,0.00,2530.00,47.60
The above report is compiled on the basis of reports submitted to depositories by NSE and BSE on 07-Aug-2018 and constitutes FPIs/FIIs trading / position of the previous trading day.
Answer 1 (score: 0)
For the second table, it appears that the actual tr, th, and td elements are not nested under a table tag. Therefore, scraping all the tr, th, and td tags directly will yield the desired data, and by applying itertools.groupby, the original table structure can be recovered.
import requests, itertools
from bs4 import BeautifulSoup as soup

d = soup(requests.get('https://www.fpi.nsdl.co.in/web/Reports/Latest.aspx').text, 'html.parser')

# for each row, take the header cells if present, otherwise the data cells
table_data = [[j.text for j in (lambda x: i.find_all('td') if not x else x)(i.find_all('th'))] for i in d.find_all('tr')]

# split the rows into alternating groups: title rows (starting with 'Daily Trends') and the data rows that follow them
final_table = [list(b) for _, b in itertools.groupby(table_data, key=lambda x: x[0].startswith('Daily Trends'))]

# re-attach each title group to its following data group to rebuild the two tables
table1, table2 = [final_table[i] + final_table[i+1] for i in range(0, len(final_table), 2)]
Output:

table1:
[['Daily Trends in FPI Investments on 08-Aug-2018'], ['Reporting Date', 'Debt/Equity/Hybrid', 'Investment Route', 'Gross Purchases(Rs. Crore)', 'Gross Sales (Rs. Crore)', 'Net Investment (Rs. Crore)', 'Net Investment US($) million', 'Conversion (1 USD TO INR)*'], ['08-Aug-2018', 'Equity', 'Stock Exchange', '3463.67', '3343.93', '119.74', '17.40', ' Rs.68.8000'], ['Primary market & others', '0.00', '7.23', '(7.23)', '(1.05)'], ['Sub-total', '3463.67', '3351.16', '112.51', '16.35'], ['Debt', 'Stock Exchange', '1213.42', '450.23', '763.19', '110.93'], ['Primary market & others', '40.77', '62.95', '(22.18)', '(3.22)'], ['Sub-total', '1254.19', '513.18', '741.01', '107.71'], ['Hybrid', 'Stock Exchange', '3.99', '6.96', '(2.97)', '(0.43)'], ['Primary market & others', '0.00', '0.00', '0.00', '0.00'], ['Sub-total', '3.99', '6.96', '(2.97)', '(0.43)'], ['Total', '4721.85', '3871.30', '850.55', '123.63'], ['The data presented above is compiled on the basis of reports submitted to depositories by DDPs on 08-Aug-2018 and constitutes trades conducted by FPIs/FIIs on and upto the previous trading day(s).Note']]
table2:
[['Daily Trends in FPI Derivative Trades on 08-Aug-2018'], ['Reporting Date', 'Derivative Products', 'Buy', 'Sell', 'Open Interest at the'], ['Open Interest at the'], ['No. of Contracts', 'Amount in Crore', 'No. of Contracts', 'Amount in Crore', 'No. of Contracts', 'Amount in Crore'], ['08-Aug-2018', 'Index Futures', '18797.00', '1732.24', '16696.00', '1600.94', '303684.00', '26636.51'], ['Index Options', '495820.00', '50403.69', '512765.00', '52075.29', '673371.00', '60394.18'], ['Stock Futures', '176472.00', '11999.53', '178301.00', '12020.70', '1116162.00', '83275.79'], ['Stock Options', '98471.00', '6949.88', '101906.00', '7204.18', '116286.00', '8824.33'], ['Interest Rate Futures', '0.00', '0.00', '0.00', '0.00', '2530.00', '47.57'], ['The above report is compiled on the basis of reports submitted to depositories by NSE and BSE on 08-Aug-2018 and constitutes FPIs/FIIs trading / position of the previous trading day.']]
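If you then want these back in csv files, as in the original question, a minimal sketch, assuming table1 and table2 are the lists of rows produced above (the fpi file names just mirror the question's naming):

import csv

for num, rows in enumerate((table1, table2), start=1):
    # write each rebuilt table to its own csv file, one list per row
    with open('fpi%d.csv' % num, 'w') as f:
        csv.writer(f).writerows(rows)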