Scraping data from Yahoo Finance

Time: 2019-12-08 13:32:06

Tags: python, web-scraping

I want to scrape a company's country (United States), shown on its Yahoo Finance profile page. The link is:

https://finance.yahoo.com/quote/AAPL/profile?p=AAPL 

I tried the code below but could not extract it. I am new to scraping data, so any help would be appreciated.

My code:

import requests
from lxml import html

# XPath for the value next to the 'Sector' label on the profile page
xp = "//span[text()='Sector']/following-sibling::span[1]"

symbol = 'AAPL'

url = 'https://finance.yahoo.com/quote/' + symbol + '/profile?p=' + symbol

page = requests.get(url)
tree = html.fromstring(page.content)

d = {}

I prefer lxml and requests and have not worked with BeautifulSoup, so I would prefer the answer to stick to that code base.

Much appreciated.

3 Answers:

Answer 0 (score: 2)

Perhaps you can combine BeautifulSoup with a regex search to filter out the location:

import requests
from bs4 import BeautifulSoup
import re

symbol = 'TEVA'
url = 'https://finance.yahoo.com/quote/' + symbol + '/profile?p=' + symbol

page = requests.get(url)
soup = BeautifulSoup(page.content, 'html.parser')

# The profile address sits in a <p> with this auto-generated class.
baseTag = soup.find_all('p', {'class': "D(ib) W(47.727%) Pend(40px)"})

# The text nodes are wrapped in React comment markers; the last match is the country.
matches = re.findall(r" -->(.*?)<!--", str(baseTag))
print(matches[-1])

I tested it on Google (GOOG), Apple (AAPL), and Teva Pharmaceutical Industries Limited (TEVA).
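Since the question prefers lxml and requests, roughly the same extraction can also be written as a single XPath query. This is only a sketch that assumes Yahoo still renders the profile address in a <p> with the class 'D(ib) W(47.727%) Pend(40px)' and that the country is its last non-empty text node:

import requests
from lxml import html

symbol = 'TEVA'
url = 'https://finance.yahoo.com/quote/' + symbol + '/profile?p=' + symbol

page = requests.get(url)
tree = html.fromstring(page.content)

# The address paragraph contains street, city and country as separate text nodes;
# the country is normally the last non-empty one (assumes the class name is unchanged).
texts = tree.xpath("//p[contains(@class, 'D(ib) W(47.727%) Pend(40px)')]/text()")
texts = [t.strip() for t in texts if t.strip()]
print(texts[-1] if texts else 'country not found')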

Answer 1 (score: 1)

See if this works for you:

xpp = tree.xpath('//div[@data-reactid=7]/p/text()[3]')[0].strip()
xpp

Output:


'United States'
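For completeness, here is how that line might slot into the requests and lxml setup from the question. It is only a sketch; the data-reactid value and the text() index depend on the markup Yahoo serves at any given time and may need adjusting:

import requests
from lxml import html

symbol = 'AAPL'
url = 'https://finance.yahoo.com/quote/' + symbol + '/profile?p=' + symbol

page = requests.get(url)
tree = html.fromstring(page.content)

# Third text node of the address <p> inside the profile block (position may change).
country = tree.xpath('//div[@data-reactid=7]/p/text()[3]')[0].strip()
print(country)  # e.g. United States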

Answer 2 (score: 0)

Don't scrape; use yfinance instead, which is updated regularly and simplifies everything:

import yfinance as yf
df = yf.download('TWTR')  # daily OHLCV price history for the ticker

If you want to plot it:

import finplot as fplt
# finplot's candlestick_ochl expects columns in open, close, high, low order
fplt.candlestick_ochl(df[['Open','Close','High','Low']])
fplt.show()

[screenshot of the resulting candlestick chart]
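If only the country from the profile is needed, yfinance also exposes profile fields through Ticker.info. A minimal sketch, assuming the returned dictionary contains a 'country' key for listed companies (field names can vary between yfinance versions):

import yfinance as yf

ticker = yf.Ticker('AAPL')
info = ticker.info  # dictionary of profile/summary fields

# 'country' is usually present for equities; fall back gracefully if it is missing.
print(info.get('country', 'country not available'))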