I have written a function that fetches and parses news data from Google for a given stock ticker, but I am sure there are ways it could be improved. For starters, my function returns an object in the GMT time zone rather than the user's local time zone, and it fails if the number passed is greater than 299 (presumably because Google only returns 300 stories per ticker). This is partly in response to my own earlier question on Stack Overflow, and it leans heavily on this blog post.

tl;dr: how can I improve this function?
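On the time-zone point, one option (a sketch, not integrated into the function below) is to convert to whatever zone the user's machine reports via `Sys.timezone()` instead of hard-coding one; note that `Sys.timezone()` can return NA on some platforms, so a fallback is needed:

```r
library(lubridate)

# Hypothetical helper: convert a GMT timestamp to the user's local zone
# rather than a fixed one. Falls back to GMT if the zone is unknown.
localize <- function(t) {
  tz <- Sys.timezone()
  if (is.na(tz)) tz <- "GMT"
  with_tz(t, tzone = tz)
}
```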
getNews <- function(symbol, number) {
  # Warn about length: Google returns at most 300 stories per symbol
  if (number > 300) {
    warning("May only get 300 stories from Google")
  }
  # load libraries
  require(XML); require(plyr); require(stringr); require(lubridate)
  require(xts); require(RDSTK)
  # construct url to news feed rss and encode it correctly
  url.b1 = 'http://www.google.com/finance/company_news?q='
  url = paste(url.b1, symbol, '&output=rss', '&start=', 1,
              '&num=', number, sep = '')
  url = URLencode(url)
  # parse xml tree, get item nodes, extract data into a data frame
  doc = xmlTreeParse(url, useInternalNodes = TRUE)
  nodes = getNodeSet(doc, "//item")
  mydf = ldply(nodes, function(x) as.data.frame(xmlToList(x)))
  # clean up names of data frame
  names(mydf) = str_replace_all(names(mydf), "value\\.", "")
  # convert pubDate to a date-time object and convert the time zone
  pubDate = strptime(mydf$pubDate,
                     format = '%a, %d %b %Y %H:%M:%S', tz = 'GMT')
  pubDate = with_tz(pubDate, tz = 'America/New_York')
  mydf$pubDate = NULL
  # parse the description field
  mydf$description <- as.character(mydf$description)
  parseDescription <- function(x) {
    out <- html2text(x)$text
    out <- strsplit(out, '\n|--')[[1]]
    # the Lead is the longest chunk of text
    TextLength <- sapply(out, nchar)
    Lead <- out[TextLength == max(TextLength)]
    # the Site is the third chunk
    Site <- out[3]
    # return the cleaned fields
    out <- c(Site, Lead)
    names(out) <- c('Site', 'Lead')
    out
  }
  description <- lapply(mydf$description, parseDescription)
  description <- do.call(rbind, description)
  mydf <- cbind(mydf, description)
  # format as an xts object indexed by publication time
  mydf = xts(mydf, order.by = pubDate)
  # drop extra attributes that we don't use yet
  mydf$guid.text = mydf$guid..attrs = mydf$description = mydf$link = NULL
  return(mydf)
}
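For reference, a typical call would look like the following. This is illustrative only: the Google Finance RSS endpoint the function depends on has since been retired, so the call cannot be run against the live service today.

```r
# Illustrative only: fetch up to 100 AAPL stories as an xts object
# indexed by publication time in America/New_York.
news <- getNews("AAPL", 100)
head(news$Lead)
```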
Answer 0 (score: 5)

Here is a shorter version of the getNews function:
getNews2 <- function(symbol, number) {
  # load libraries
  require(XML); require(plyr); require(stringr); require(lubridate)
  # construct url to news feed rss and encode it correctly
  url.b1 = 'http://www.google.com/finance/company_news?q='
  url = paste(url.b1, symbol, '&output=rss', '&start=', 1,
              '&num=', number, sep = '')
  url = URLencode(url)
  # parse xml tree, get item nodes, extract data into a data frame
  doc = xmlTreeParse(url, useInternalNodes = TRUE)
  nodes = getNodeSet(doc, "//item")
  mydf = ldply(nodes, function(x) as.data.frame(xmlToList(x)))
  # clean up names of data frame
  names(mydf) = str_replace_all(names(mydf), "value\\.", "")
  # convert pubDate to a date-time object and convert the time zone
  mydf$pubDate = strptime(mydf$pubDate,
                          format = '%a, %d %b %Y %H:%M:%S', tz = 'GMT')
  mydf$pubDate = with_tz(mydf$pubDate, tz = 'America/New_York')
  # drop guid.text and guid..attrs
  mydf$guid.text = mydf$guid..attrs = NULL
  return(mydf)
}
Also, there may be a bug in your code: when I tried it with symbol = 'WMT' it returned an error, whereas I believe getNews2 handles WMT fine. Take a look and let me know whether it works for you.

PS. The description column still contains HTML markup, but extracting the plain text from it should be easy. I will post an update when I find the time.