How to read data from a web link with dropdown fields using Python

Posted: 2019-05-17 09:00:59

Tags: python python-3.x pandas web-scraping

I want to use Python to read data from the link below and load it into a pandas DataFrame.

url = 'https://www.nseindia.com/products/content/derivatives/equities/historical_fo.htm'

The page has several dropdown fields, such as Select Instrument, Select Symbol, Select Year, Select Expiry, Select Option Type, Enter Strike Price, Select Time Period, and so on.

[Screenshot: NSE page]

I want to send the output to a pandas DataFrame for further processing.

2 answers:

Answer 0 (score: 1)

Using the "Network" tab of the DevTools in Chrome/Firefox, I can see all the requests sent from the browser to the server. When I click "Get Data", I see a URL that contains the options from the dropdown fields:

https://www.nseindia.com/products/dynaContent/common/productsSymbolMapping.jsp?instrumentType=FUTIDX&symbol=NIFTY&expiryDate=select&optionType=select&strikePrice=&dateRange=day&fromDate=&toDate=&segmentLink=9&symbolCount=

Normally I could use the URL directly in pd.read_html("https://...") to get all the tables in the HTML, and then use [0] to take the first table as a DataFrame.
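A minimal sketch of that direct approach (the URL here is only a placeholder for a page whose tables parse cleanly):

import pandas as pd

# read_html fetches the page and parses every <table> it finds
tables = pd.read_html("https://example.com/page_with_tables.html")  # placeholder URL
df = tables[0]  # first table as a DataFrame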

Because that gave an error, I used the requests module to get the HTML and then pd.read_html("string_with_html") to convert all the tables in the HTML into DataFrames.

That gives me a DataFrame with a multilevel column index and three unknown columns.

More details are in the code comments.

import requests
import pandas as pd

# create session to get and keep cookies
s = requests.Session()

# get page and cookies 
url = 'https://www.nseindia.com/products/content/derivatives/equities/historical_fo.htm'
s.get(url)

# get HTML with tables
url = "https://www.nseindia.com/products/dynaContent/common/productsSymbolMapping.jsp?instrumentType=FUTIDX&symbol=NIFTY&expiryDate=select&optionType=select&strikePrice=&dateRange=day&fromDate=&toDate=&segmentLink=9&symbolCount="
headers = {
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0',
    'X-Requested-With': 'XMLHttpRequest',
    'Referer': 'https://www.nseindia.com/products/content/derivatives/equities/historical_fo.htm'
}

# get HTML from url (use the session so the cookies from the first request are sent)
r = s.get(url, headers=headers)
print('status:', r.status_code)
#print(r.text)

# use pandas to parse the tables in the HTML into DataFrames
all_tables = pd.read_html(r.text)
print('tables:', len(all_tables))


# get first DataFrame
df = all_tables[0]
#print(df.columns)

# drop multilevel column index
df.columns = df.columns.droplevel()
#print(df.columns)

# drop unknown columns
df = df.drop(columns=['Unnamed: 14_level_1', 'Unnamed: 15_level_1', 'Unnamed: 16_level_1'])
print(df.columns)

Result:

Index(['Symbol', 'Date', 'Expiry', 'Open', 'High', 'Low', 'Close', 'LTP',
       'Settle Price', 'No. of contracts', 'Turnover * in Lacs', 'Open Int',
       'Change in OI', 'Underlying Value'],
      dtype='object')


  Symbol         Date       Expiry  ...  Open Int  Change in OI  Underlying Value
0  NIFTY  16-May-2019  30-May-2019  ...  15453150       -242775           11257.1
1  NIFTY  16-May-2019  27-Jun-2019  ...   1995975        383250           11257.1
2  NIFTY  16-May-2019  25-Jul-2019  ...    116775          2775           11257.1

[3 rows x 14 columns]
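If you need other symbols or date ranges, the same request can also be built from a dict of query parameters instead of editing the long URL by hand. This is a sketch assuming the parameter names from the URL above; the values are only examples:

import requests
import pandas as pd

s = requests.Session()

# visit the page first to collect cookies
page = 'https://www.nseindia.com/products/content/derivatives/equities/historical_fo.htm'
s.get(page)

# same endpoint as above; requests builds the query string from `params`
endpoint = 'https://www.nseindia.com/products/dynaContent/common/productsSymbolMapping.jsp'
params = {
    'instrumentType': 'FUTIDX',   # example values - change to the dropdown selections you need
    'symbol': 'NIFTY',
    'expiryDate': 'select',
    'optionType': 'select',
    'strikePrice': '',
    'dateRange': 'day',
    'fromDate': '',
    'toDate': '',
    'segmentLink': '9',
    'symbolCount': '',
}
headers = {
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0',
    'X-Requested-With': 'XMLHttpRequest',
    'Referer': page,
}

r = s.get(endpoint, params=params, headers=headers)
df = pd.read_html(r.text)[0]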

Answer 1 (score: 0)

import requests
import pandas as pd

#############################################
pd.set_option('display.max_rows', 500000)
pd.set_option('display.max_columns', 100)
pd.set_option('display.width', 50000)
#############################################

# create session to get and keep cookies
s = requests.Session()

# get page and cookies
url = 'https://www.nseindia.com/products/content/derivatives/equities/historical_fo.htm'
s.get(url)

# symbol and date to query
symbol = ['SBIN']
dates = ['17-May-2019']

# build the query URL for the selected symbol and date
url = "https://www.nseindia.com/products/dynaContent/common/productsSymbolMapping.jsp?instrumentType=OPTSTK&symbol=" + symbol[0] + "&expiryDate=select&optionType=CE&strikePrice=&dateRange=day&fromDate=" + dates[0] + "&toDate=" + dates[0] + "&segmentLink=9&symbolCount="
# print(url)

headers = {
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0',
    'X-Requested-With': 'XMLHttpRequest',
    'Referer': 'https://www.nseindia.com/products/content/derivatives/equities/historical_fo.htm'
}

# get HTML from url (use the session so the cookies are sent)
r = s.get(url, headers=headers)
# print('status:', r.status_code)
# print(r.text)

# use pandas to parse the tables in the HTML into DataFrames
all_tables = pd.read_html(r.text)
# print('tables:', len(all_tables))


# get first DataFrame
df = all_tables[0]
# print(df.columns)

# the real header sits inside the data rows: take that row as column names, then drop the header rows
df = df.rename(columns=df.iloc[1]).drop(df.index[0])
df = df.iloc[1:].reset_index(drop=True)

df = df[['Symbol','Date','Expiry','Optiontype','Strike Price','Close','LTP','No. of contracts','Open Int','Change in OI','Underlying Value']]
print(df)
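One follow-up note: because the column names are taken from a data row here, the numeric columns are likely left as strings (object dtype). A short sketch of converting them before further processing, assuming the column names selected above:

# convert numeric columns; values that cannot be parsed (e.g. '-') become NaN
numeric_cols = ['Strike Price', 'Close', 'LTP', 'No. of contracts',
                'Open Int', 'Change in OI', 'Underlying Value']
df[numeric_cols] = df[numeric_cols].apply(pd.to_numeric, errors='coerce')
print(df.dtypes)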