How to fetch data with requests or another module from a page whose URL does not change?

Time: 2019-01-08 16:34:37

Tags: python beautifulsoup python-requests

I am currently using selenium to go to this page:

https://www.nseindia.com/products/content/derivatives/equities/historical_fo.htm

I then select the relevant options and click the Get Data button.

After that I retrieve the generated table with BeautifulSoup.

Is it possible to use requests in this case? If so, can anyone point me to a tutorial?

2 answers:

Answer 0 (score: 3):

When you select the options, you are essentially just setting the query parameters that the Get Data button sends in its request to the backend. If you mimic that request, e.g. with curl like this:

curl 'https://www.nseindia.com/products/dynaContent/common/productsSymbolMapping.jsp?instrumentType=FUTIDX&symbol=NIFTYMID50&expiryDate=31-12-2020&optionType=select&strikePrice=&dateRange=day&fromDate=&toDate=&segmentLink=9&symbolCount=' -H 'Pragma: no-cache' -H 'Accept-Encoding: gzip, deflate, br' -H 'Accept-Language: en-US,en;q=0.9' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36' -H 'Accept: */*' -H 'Referer: https://www.nseindia.com/products/content/derivatives/equities/historical_fo.htm' -H 'X-Requested-With: XMLHttpRequest' -H 'Connection: keep-alive' -H 'Cache-Control: no-cache' --compressed

then you can do the same thing with requests:

import requests

headers = {
    'Pragma': 'no-cache',
    'Accept-Encoding': 'gzip, deflate, br',
    'Accept-Language': 'en-US,en;q=0.9',
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36',
    'Accept': '*/*',
    'Referer': 'https://www.nseindia.com/products/content/derivatives/equities/historical_fo.htm',
    'X-Requested-With': 'XMLHttpRequest',
    'Connection': 'keep-alive',
    'Cache-Control': 'no-cache',
}

params = (
    ('instrumentType', 'FUTIDX'),
    ('symbol', 'NIFTYMID50'),
    ('expiryDate', '31-12-2020'),
    ('optionType', 'select'),
    ('strikePrice', ''),
    ('dateRange', 'day'),
    ('fromDate', ''),
    ('toDate', ''),
    ('segmentLink', '9'),
    ('symbolCount', ''),
)

response = requests.get('https://www.nseindia.com/products/dynaContent/common/productsSymbolMapping.jsp', headers=headers, params=params)
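
The response body is the same HTML fragment the page renders, so you can parse the table from `response.text` just as with the Selenium approach. A minimal, self-contained sketch of the parsing step with BeautifulSoup, using a dummy HTML snippet in place of the real response (the NSE endpoint's actual markup and column names are not shown here, so this table is an assumption for illustration):

```python
from bs4 import BeautifulSoup

# Dummy HTML standing in for response.text; the real NSE response
# contains a table of contract-wise price/volume rows.
html = """
<table>
  <tr><th>Symbol</th><th>Close</th></tr>
  <tr><td>NIFTYMID50</td><td>4321.5</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
rows = []
for tr in soup.find_all("tr"):
    # Collect the text of every header/data cell in this row
    cells = [td.get_text(strip=True) for td in tr.find_all(["th", "td"])]
    rows.append(cells)

print(rows)  # [['Symbol', 'Close'], ['NIFTYMID50', '4321.5']]
```

With the real response you would pass `response.text` to `BeautifulSoup` instead of the dummy string.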

A handy site for learning how to do this kind of curl-to-requests conversion:

https://curl.trillworks.com/

Answer 1 (score: 2):

You will have to experiment with different values in the query dictionary, but I was able to fetch the table by building the URL in the same format the page's GET request uses:

import requests
import pandas as pd

query = {  # just mimicking sample query that I saw after loading link
'instrumentType': 'OPTIDX',
'symbol': 'BANKNIFTY',
'expiryDate': 'select',
'optionType': 'CE',
'strikePrice': '23700',
'dateRange': '',
'fromDate': '05-06-2017',
'toDate': '08-06-2017',
'segmentLink': '9',
'symbolCount': '',
}

url = 'https://www.nseindia.com/products/dynaContent/common/productsSymbolMapping.jsp?\
instrumentType=%s\
&symbol=%s\
&expiryDate=%s\
&optionType=%s\
&strikePrice=%s\
&dateRange=%s\
&fromDate=%s\
&toDate=%s\
&segmentLink=%s\
&symbolCount=%s' %(query['instrumentType'],
  query['symbol'],
  query['expiryDate'],
  query['optionType'],
  query['strikePrice'],
  query['dateRange'],
  query['fromDate'],
  query['toDate'],
  query['segmentLink'],
  query['symbolCount']
  )

response = requests.get(url)

table = pd.read_html(response.text)
table[0]

Output:

0   Historical Contract-wise Price Volume Data        ...                      NaN
1                                       Symbol        ...         Underlying Value
2                                    BANKNIFTY        ...                 23459.65
3                                    BANKNIFTY        ...                 23459.65
4                                    BANKNIFTY        ...                 23459.65
5                                    BANKNIFTY        ...                 23459.65
...

[42 rows x 17 columns]
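
Note that `pd.read_html` returns the raw table including the title row (row 0) and the header row (row 1) above, so you will likely want to promote the header row to column names and drop the title. A small, self-contained sketch using a dummy frame shaped like the output (the column labels are assumptions based on the sample above):

```python
import pandas as pd

# Dummy frame mimicking the shape of table[0]: a title row,
# a header row, then the data rows.
raw = pd.DataFrame([
    ["Historical Contract-wise Price Volume Data", None],
    ["Symbol", "Underlying Value"],
    ["BANKNIFTY", "23459.65"],
    ["BANKNIFTY", "23459.65"],
])

df = raw.iloc[2:].reset_index(drop=True)  # keep only the data rows
df.columns = raw.iloc[1]                  # promote the header row
print(df)
```

With the real result you would apply the same two lines to `table[0]`.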