R: Interpolate + Extrapolate over the border using data.table?

Asked: 2017-09-05 13:45:51

Tags: r data.table interpolation extrapolation

So, here is my problem: I have a dataset A (a data.table object) with the following structure:

date       days     rate
1996-01-02    9 5.763067
1996-01-02   15 5.745902
1996-01-02   50 5.673317
1996-01-02   78 5.608884
1996-01-02  169 5.473762
1996-01-03    9 5.763067
1996-01-03   14 5.747397
1996-01-03   49 5.672263
1996-01-03   77 5.603705
1996-01-03  168 5.470584
1996-01-04   11 5.729460
1996-01-04   13 5.726104
1996-01-04   48 5.664931
1996-01-04   76 5.601891
1996-01-04  167 5.468961

Note that the days column and its length can differ from date to date. My goal now is to interpolate the rate (piecewise linearly) along days. I do this for each date via

approx(x=A[,days],y=A[,rate],xout=days_vec,rule=2)

where days_vec <- min_days:max_days, i.e. the range of days I am interested in (e.g. 1:100).

I have two problems:

  1. approx only interpolates, i.e. it does not extend the linear fit beyond min(x) and max(x). So if I am interested in days 1:100, I first have to handle the days before day 9 manually, using the line through days 9 and 15 (the first two rows of A):

    first_days <- 1:(A[1,days]-1)  # 1:8
    rate_vec[first_days] <- A[1,rate] + (first_days - A[1,days])/(A[2,days]-A[1,days])*(A[2,rate]-A[1,rate])

     Then I use the approx call above for rate_vec[9:100]. Is there a way to do this in one step?

  2. Given that I need these two steps, and that the transition point between them (here day 9) differs between dates, I do not see how to implement this with data.table, even though that would be strongly preferred (a data.table approach that interpolates/extrapolates and returns the expanded data.table object). So I currently run a for loop over the dates, roughly as sketched below, which is of course much slower.

Question: Can the above be implemented better, and can it somehow be done with a data.table approach instead of looping over A?
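
For reference, my current two-step loop looks roughly like this (a minimal sketch only; the loop structure and the names days_vec, rate_vec, Ad and res are illustrative, not my exact code):

library(data.table)

days_vec <- 1:100                                 # days of interest
res <- list()
for (d in as.character(unique(A$date))) {         # loop over dates (the slow part)
  Ad <- A[date == as.Date(d)]                     # observations for one date
  # interpolation; rule=2 keeps the rate constant outside the observed days
  rate_vec <- approx(x = Ad[, days], y = Ad[, rate],
                     xout = days_vec, rule = 2)$y
  # manual linear extrapolation for the days before the first observation
  first_days <- 1:(Ad[1, days] - 1)
  rate_vec[first_days] <- Ad[1, rate] +
    (first_days - Ad[1, days]) / (Ad[2, days] - Ad[1, days]) *
    (Ad[2, rate] - Ad[1, rate])
  res[[d]] <- data.table(date = as.Date(d), days = days_vec, rate = rate_vec)
}
res <- rbindlist(res)                             # expanded data.table: date, days, rate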

1 answer:

Answer 0 (score: 4):

How about something like this?

# please try to make a fully reproducible example!
library(data.table)
df <- fread(input=
"date       days     rate
1996-01-02    9 5.763067
1996-01-02   15 5.745902
1996-01-02   50 5.673317
1996-01-02   78 5.608884
1996-01-02  169 5.473762
1996-01-03    9 5.763067
1996-01-03   14 5.747397
1996-01-03   49 5.672263
1996-01-03   77 5.603705
1996-01-03  168 5.470584
1996-01-04   11 5.729460
1996-01-04   13 5.726104
1996-01-04   48 5.664931
1996-01-04   76 5.601891
1996-01-04  167 5.468961")
df[,date := as.Date(date)]

1. Create NA values for the days in the 1:100 range that are not in the dataset
df <- 
  merge(df,
        expand.grid( days=1L:100L,                   # whatever range you are interested in
                     date=df[,sort(unique(date))] ), # dates with at least one observation
        all=TRUE # "outer join" on all common columns (date, days)
        )
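
As a side note (my variation, not part of this answer's code): the same grid of date/days pairs can be built with data.table's CJ() instead of expand.grid(), which keeps everything as data.table objects:

grid <- CJ( days = 1L:100L,                      # cross join: every day in the range ...
            date = df[,sort(unique(date))] )     # ... combined with every observed date
df <- merge(df, grid, all=TRUE)                  # same outer join as above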

2. For each date value, use a linear model to predict the NA values of rate.

df[, rate := ifelse(is.na(rate),
                    predict(lm(rate~days,.SD),.SD), # impute NA w/ lm using available data
                    rate),                          # if not NA, don't impute
   keyby=date]

This gives you:

head(df,10)
#           date days     rate
#  1: 1996-01-02    1 5.766787 <- rates for days 1-8 & 10 are imputed 
#  2: 1996-01-02    2 5.764987
#  3: 1996-01-02    3 5.763186
#  4: 1996-01-02    4 5.761385
#  5: 1996-01-02    5 5.759585
#  6: 1996-01-02    6 5.757784
#  7: 1996-01-02    7 5.755983
#  8: 1996-01-02    8 5.754183
#  9: 1996-01-02    9 5.763067 <- this rate was given
# 10: 1996-01-02   10 5.750581

You may get an error if some date has fewer than two observations of rate, since there are not enough points to fit a line.
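
If that can happen in your data, one possible guard (my addition, not something from the answer) is to run the imputation step only for dates with at least two observed rates:

df[, rate := if (sum(!is.na(rate)) >= 2L) {
               # enough points: impute NAs from the per-date linear fit
               ifelse(is.na(rate), predict(lm(rate~days,.SD),.SD), rate)
             } else {
               rate   # too few points: leave this date untouched
             },
   keyby=date]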

Alternative: a rolling-join solution for piecewise linear interpolation

This needs a rolling join in each direction, plus an average that ignores NA values.

Note, however, that this approach does not extrapolate: outside the observed days it simply carries the first or last observation forward as a constant.

setkey(df, date, days)

df2 <- data.table( # this is your framework of date/days pairs you want to evaluate
        expand.grid( date=df[,sort(unique(date))],
                     days=1L:100L),
        key = c('date','days')
)

# average of non-NA values between two vectors
meanIfNotNA <- function(x,y){
  (ifelse(is.na(x),0,x) + ifelse(is.na(y),0,y)) /
    ( as.numeric(!is.na(x)) + as.numeric(!is.na(y)))
}
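
A quick element-wise illustration of meanIfNotNA (input values made up for the example):

meanIfNotNA(c(1, NA, 3), c(2, 4, NA))
# [1] 1.5 4.0 3.0    (a pair where both values are NA would give NaN)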

df3 <- # this is your evaluations for the date/days pairs in df2.
  setnames(
    df[setnames( df[df2, roll=+Inf], # rolling join Last Obs Carried Fwd (LOCF)
                 old = 'rate',
                 new = 'rate_locf' 
               ),
     roll=-Inf], # rolling join Next Obs Carried Backwd (NOCB)
    old = 'rate',
    new = 'rate_nocb'
  )[, rate := meanIfNotNA(rate_locf,rate_nocb)]
# once you're satisfied that this works, you can include rate_locf := NULL, etc.

head(df3,10)
#          date days rate_nocb rate_locf     rate
#  1: 1996-01-02    1  5.763067        NA 5.763067
#  2: 1996-01-02    2  5.763067        NA 5.763067
#  3: 1996-01-02    3  5.763067        NA 5.763067
#  4: 1996-01-02    4  5.763067        NA 5.763067
#  5: 1996-01-02    5  5.763067        NA 5.763067
#  6: 1996-01-02    6  5.763067        NA 5.763067
#  7: 1996-01-02    7  5.763067        NA 5.763067
#  8: 1996-01-02    8  5.763067        NA 5.763067
#  9: 1996-01-02    9  5.763067  5.763067 5.763067  <- this rate was given
# 10: 1996-01-02   10  5.745902  5.763067 5.754485
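
Following the comment in the code above, the helper columns can be dropped once you are satisfied with the result (a small cleanup sketch, not part of the original answer):

# drop the intermediate LOCF/NOCB columns, leaving just date, days and rate
df3[, c("rate_locf", "rate_nocb") := NULL]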