I have Wikipedia page view data like this:
library(wikipediatrend)
views <- wp_trend(
  page        = "European debt crisis",
  from        = "2010-01-01",
  to          = "2014-12-31",
  lang        = "en",
  friendly    = TRUE,
  requestFrom = "wp.trend.tester at wptt.wptt",
  userAgent   = TRUE
)
date count
2010-01-01 128
2010-01-02 142
And I have S&P 500 data:
library(quantmod)
startDate = as.Date("2010-01-01")
endDate = as.Date("2014-12-31")
getSymbols("^GSPC", src = "yahoo", from = startDate, to = endDate)
Date Open High Low Close Volume
2010-01-04 1116.56 1133.87 1116.56 1132.99 3991400000
2010-01-05 1132.66 1136.63 1129.66 1136.52 2491020000
Now I want to extract only those days of the Wikipedia page views on which trading actually occurred, i.e. excluding weekends, holidays, and days with unscheduled closures such as Hurricane Sandy. What is the simplest way to extract these values?
Answer 0 (score: 0)
This is a straightforward subsetting (or filtering) problem:
# get the wikipedia views data
library(wikipediatrend)
views <- wp_trend(
  page        = "European debt crisis",
  from        = "2010-01-01",
  to          = "2014-12-31",
  lang        = "en",
  friendly    = TRUE,
  requestFrom = "wp.trend.tester at wptt.wptt",
  userAgent   = TRUE
)
# get the stock trading data
library(quantmod)
startDate = as.Date("2010-01-01")
endDate = as.Date("2014-12-31")
getSymbols("^GSPC", src = "yahoo", from = startDate, to = endDate)
Here is how to do the subsetting with [ from base R:
# where are the trading dates in the stock data?
index(GSPC)
# where are the dates in the wikipedia data?
views$date
So we want to subset the views data frame by its date column so that it contains only the dates present in index(GSPC):
# subset wikipedia data by stock data
# pattern is:
# table_to_subset[rule_to_subset_rows, rule_to_subset_columns]
# so to subset the wikipedia view data by the dates of the stock trading
# data we can do this:
wiki_data <- views[views$date %in% index(GSPC), ]
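The same pattern can be checked on a self-contained toy example (made-up dates and counts standing in for the downloaded data; if views$date comes back as character rather than Date, convert it with as.Date first):

```r
# Toy stand-ins for views and index(GSPC)
views_toy <- data.frame(
  date  = as.Date(c("2010-01-01", "2010-01-02", "2010-01-04")),
  count = c(128, 142, 150)
)
trading_days <- as.Date(c("2010-01-04", "2010-01-05"))

# Keep only the rows whose date is a trading day
wiki_toy <- views_toy[views_toy$date %in% trading_days, ]
wiki_toy
#         date count
# 3 2010-01-04   150
```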
You could also do this with the data.table or dplyr packages, which may be faster if your data is very large.
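For illustration, a dplyr version of the same filter might look like this (a sketch on toy data, assuming dplyr is installed; the column names mirror the ones above):

```r
library(dplyr)

views_toy <- data.frame(
  date  = as.Date(c("2010-01-01", "2010-01-02", "2010-01-04")),
  count = c(128, 142, 150)
)
trading_days <- as.Date(c("2010-01-04", "2010-01-05"))

# filter() keeps the rows where the condition is TRUE
wiki_toy <- views_toy %>% filter(date %in% trading_days)
```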
Answer 1 (score: 0)
Yes, I figured it out later and solved it myself:
# convert the xts object to a data frame with an explicit date column
gspcdf <- data.frame(date = index(GSPC), GSPC, row.names = NULL)
# inner join on date keeps only the trading days
CombDF <- merge(views, gspcdf, by = "date")
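This works because merge() defaults to an inner join, so rows whose date has no match in the other table (weekends, holidays) simply drop out. A self-contained sketch with made-up numbers:

```r
# Toy versions of the two tables (the real ones come from wp_trend/getSymbols)
views_toy <- data.frame(
  date  = as.Date(c("2010-01-01", "2010-01-04", "2010-01-05")),
  count = c(128, 150, 161)
)
gspc_toy <- data.frame(
  date  = as.Date(c("2010-01-04", "2010-01-05")),
  Close = c(1132.99, 1136.52)
)

# inner join on date: 2010-01-01 (a non-trading day) is dropped
comb <- merge(views_toy, gspc_toy, by = "date")
nrow(comb)
# [1] 2
```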