I can download the annual data from the link by writing the page HTML to a file and then reading it with pandas, but what I get differs from the data shown on the website, because the dates come out as June.
Now I have two questions: why do the annual dates differ from what the site displays, and how can I download the quarterly data as well?
Thanks.
Answer 0 (score: 1)
Curious, but why write the HTML to a file first and then read it back with pandas? pandas can take the URL directly:
import pandas as pd
symbol = 'AAPL'
url = 'https://finance.yahoo.com/quote/%s/financials?p=%s' %(symbol, symbol)
dfs = pd.read_html(url)
print(dfs[0])
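(Note that pd.read_html returns a list with one DataFrame per table found on the page, which is why the income statement is dfs[0]. It also needs an HTML parser such as lxml or BeautifulSoup installed, and raises a ValueError if it finds no tables.)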
Second, I'm not sure why the annual dates are coming up as June for you. Like I said above, when I run it, it shows September:
print(dfs[0])
0 ... 4
0 Revenue ... 9/26/2015
1 Total Revenue ... 233715000
2 Cost of Revenue ... 140089000
3 Gross Profit ... 93626000
4 Operating Expenses ... Operating Expenses
5 Research Development ... 8067000
6 Selling General and Administrative ... 14329000
7 Non Recurring ... -
8 Others ... -
9 Total Operating Expenses ... 162485000
10 Operating Income or Loss ... 71230000
11 Income from Continuing Operations ... Income from Continuing Operations
12 Total Other Income/Expenses Net ... 1285000
13 Earnings Before Interest and Taxes ... 71230000
14 Interest Expense ... -733000
15 Income Before Tax ... 72515000
16 Income Tax Expense ... 19121000
17 Minority Interest ... -
18 Net Income From Continuing Ops ... 53394000
19 Non-recurring Events ... Non-recurring Events
20 Discontinued Operations ... -
21 Extraordinary Items ... -
22 Effect Of Accounting Changes ... -
23 Other Items ... -
24 Net Income ... Net Income
25 Net Income ... 53394000
26 Preferred Stock And Other Adjustments ... -
27 Net Income Applicable To Common Shares ... 53394000
[28 rows x 5 columns]
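As an aside, the table comes back with integer column labels and the period-end dates sitting in the first data row. Here is a minimal cleanup sketch, assuming the layout shown above holds (the index/column handling is my own addition, not part of the original answer):

import pandas as pd

symbol = 'AAPL'
url = 'https://finance.yahoo.com/quote/%s/financials?p=%s' % (symbol, symbol)

income = pd.read_html(url)[0]
income = income.set_index(0)       # line items become the row index
income.columns = income.iloc[0]    # the first row holds the period-end dates
income = income.iloc[1:]           # drop that header row
income = income.apply(pd.to_numeric, errors='coerce')  # '-' placeholders become NaN

print(income.loc['Total Revenue'])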
For the second part, there are a few ways you could try to get the data:
1) Inspect the XHR requests and fetch the data you need by passing parameters to the request URL that generates it and can return it to you in JSON format (when I looked, I couldn't find it right away, so I moved on to the next option)
2) Search within the <script> tags, since the JSON data can sometimes sit inside them (I didn't search very thoroughly, and figured Selenium would be the more direct route given that pandas can read in the table; a rough sketch of this approach appears at the end of this answer)
3) Use Selenium to open a browser, grab the annual table, click "Quarterly", then grab that table
I went with option 3:
from selenium import webdriver
import pandas as pd
import time

symbol = 'AAPL'
url = 'https://finance.yahoo.com/quote/%s/financials?p=%s' % (symbol, symbol)

# Point this at your local chromedriver binary
driver = webdriver.Chrome('C:/chromedriver_win32/chromedriver.exe')
driver.get(url)

# Get the annual table as rendered in the browser
dfs_annual = pd.read_html(driver.page_source)
print(dfs_annual[0])

# Click "Quarterly" and give the page a moment to re-render
driver.find_element_by_xpath("//span[text()='Quarterly']").click()
time.sleep(1)

# Get the quarterly table
dfs_quarter = pd.read_html(driver.page_source)
print(dfs_quarter[0])

driver.close()
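For completeness, here is roughly what option 2 could look like. This is only a sketch: it assumes the page embeds its state as a one-line JSON blob assigned to root.App.main inside a <script> tag (which Yahoo Finance did at the time), and the regex and key names below are assumptions that may break whenever Yahoo changes the page:

import json
import re
import requests

symbol = 'AAPL'
url = 'https://finance.yahoo.com/quote/%s/financials?p=%s' % (symbol, symbol)

html = requests.get(url).text

# Assumption: the page state lives in a 'root.App.main = {...};' assignment
match = re.search(r'root\.App\.main\s*=\s*(\{.*?\});\n', html)
if match:
    data = json.loads(match.group(1))
    # Assumed key path into the store; verify against the actual payload
    stores = data['context']['dispatcher']['stores']
    history = stores['QuoteSummaryStore']['incomeStatementHistory']['incomeStatementHistory']
    for period in history:
        print(period['endDate']['fmt'], period['totalRevenue']['fmt'])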