Pulling historical analyst opinions from Yahoo Finance in R

Asked: 2011-09-23 15:15:15

Tags: r finance quantmod yahoo-finance

Yahoo Finance has data on historic analyst opinions for stocks. I'm interested in pulling this data into R for analysis, and this is what I have so far:

getOpinions <- function(symbol) {
    require(XML)
    require(xts)
    yahoo.URL <- "http://finance.yahoo.com/q/ud?"
    # Read every HTML table on the page; the analyst-opinion table is currently the 11th
    tables <- readHTMLTable(paste(yahoo.URL, "s=", symbol, sep = ""), stringsAsFactors=FALSE)
    Data <- tables[[11]]
    # Parse the date column and return the remaining columns as an xts object indexed by date
    Data$Date <- as.Date(Data$Date,'%d-%b-%y')
    Data <- xts(Data[,-1],order.by=Data[,1])
    Data
}

getOpinions('AAPL')

I'm worried that this code will break if the position of the table (currently the 11th) changes, but I can't think of an elegant way to detect which table holds the data I want. I tried the solution posted here, but it doesn't seem to fit this problem.

Is there a better way to grab this data, one that is less likely to break if Yahoo rearranges their site?
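
For reference, here is a sketch of how one can manually inspect what readHTMLTable returns in order to find the right index by hand (exploratory code only, not part of the function above):

library(XML)
tables <- readHTMLTable("http://finance.yahoo.com/q/ud?s=AAPL", stringsAsFactors = FALSE)
length(tables)            # how many tables are on the page
sapply(tables, ncol)      # column counts, to spot candidate tables
lapply(tables, head, 2)   # peek at the first two rows of each table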

Edit: it looks like there's already a package (fImport) that does this.

library(fImport)
yahooBriefing("AAPL")

Here's their solution, which doesn't return an xts object and is likely to break if the page layout changes (the yahooKeystats function in fImport is already broken):

function (query, file = "tempfile", source = NULL, save = FALSE, 
    try = TRUE) 
{
    if (is.null(source)) 
        source = "http://finance.yahoo.com/q/ud?s="
    if (try) {
        z = try(yahooBriefing(query, file, source, save, try = FALSE))
        if (class(z) == "try-error" || class(z) == "Error") {
            return("No Internet Access")
        }
        else {
            return(z)
        }
    }
    else {
        url = paste(source, query, sep = "")
        download.file(url = url, destfile = file)
        x = scan(file, what = "", sep = "\n")
        x = x[grep("Briefing.com", x)]
        x = gsub("</", "<", x, perl = TRUE)
        x = gsub("/", " / ", x, perl = TRUE)
        x = gsub(" class=.yfnc_tabledata1.", "", x, perl = TRUE)
        x = gsub(" align=.center.", "", x, perl = TRUE)
        x = gsub(" cell.......=...", "", x, perl = TRUE)
        x = gsub(" border=...", "", x, perl = TRUE)
        x = gsub(" color=.red.", "", x, perl = TRUE)
        x = gsub(" color=.green.", "", x, perl = TRUE)
        x = gsub("<.>", "", x, perl = TRUE)
        x = gsub("<td>", "@", x, perl = TRUE)
        x = gsub("<..>", "", x, perl = TRUE)
        x = gsub("<...>", "", x, perl = TRUE)
        x = gsub("<....>", "", x, perl = TRUE)
        x = gsub("<table>", "", x, perl = TRUE)
        x = gsub("<td nowrap", "", x, perl = TRUE)
        x = gsub("<td height=....", "", x, perl = TRUE)
        x = gsub("&amp;", "&", x, perl = TRUE)
        x = unlist(strsplit(x, ">"))
        x = x[grep("-...-[90]", x, perl = TRUE)]
        nX = length(x)
        x[nX] = gsub("@$", "", x[nX], perl = TRUE)
        x = unlist(strsplit(x, "@"))
        x[x == ""] = "NA"
        x = matrix(x, byrow = TRUE, ncol = 9)[, -c(2, 4, 6, 8)]
        x[, 1] = as.character(strptime(x[, 1], format = "%d-%b-%y"))
        colnames(x) = c("Date", "ResearchFirm", "Action", "From", 
            "To")
        x = x[nrow(x):1, ]
        X = as.data.frame(x)
    }
    X
}
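
Since yahooBriefing returns a plain data.frame, here is a sketch of how you could wrap its output in an xts object the way getOpinions does. briefingToXts is a made-up helper name, and it assumes the column names yahooBriefing sets (Date, ResearchFirm, Action, From, To), with dates stored as "%Y-%m-%d" character strings:

library(fImport)
library(xts)

briefingToXts <- function(symbol) {
    Data <- yahooBriefing(symbol)
    # yahooBriefing stores dates as "%Y-%m-%d" character strings (possibly factors)
    Data$Date <- as.Date(as.character(Data$Date))
    xts(Data[, -1], order.by = Data$Date)
}

briefingToXts("AAPL")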

1 Answer:

Answer (score: 3)

Here's a hack you can use. In your function, add the following:

# GET THE POSITION OF TABLE WITH MAX. ROWS
position = which.max(sapply(tables, NROW))
Data     = tables[[position]]

This will work fine as long as the longest table on the page is the one you want.

If you want to make it more robust, here's another approach:

# GET POSITION OF THE TABLE WHOSE COLUMN NAMES INCLUDE "Research Firm"
position = sapply(tables, function(tab) 'Research Firm' %in% names(tab))
Data     = tables[[which(position)[1]]]
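
Putting the two together, a version of getOpinions that looks the table up by name rather than by position might look like this (a sketch along the lines of the answer; it assumes the analyst-opinion table keeps a "Research Firm" column):

getOpinions <- function(symbol) {
    require(XML)
    require(xts)
    yahoo.URL <- "http://finance.yahoo.com/q/ud?"
    tables <- readHTMLTable(paste(yahoo.URL, "s=", symbol, sep = ""), stringsAsFactors = FALSE)
    # pick the table whose column names include "Research Firm"
    hit <- which(sapply(tables, function(tab) "Research Firm" %in% names(tab)))
    if (length(hit) == 0) stop("No analyst opinion table found for ", symbol)
    Data <- tables[[hit[1]]]
    Data$Date <- as.Date(Data$Date, "%d-%b-%y")
    xts(Data[, -1], order.by = Data[, 1])
}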