R: using the data.table := operator to compute new columns

Asked: 2012-05-22 13:43:07

Tags: r data.table

Consider the following data:

library(data.table)
dt <- data.table(TICKER=c(rep("ABC",10),"DEF"),
        PERIOD=c(rep(as.Date("2010-12-31"),10),as.Date("2011-12-31")),
        DATE=as.Date(c("2010-01-05","2010-01-07","2010-01-08","2010-01-09","2010-01-10","2010-01-11","2010-01-13","2010-04-01","2010-04-02","2010-08-03","2011-02-05")),
        ID=c(1,2,1,3,1,2,1,1,2,2,1),VALUE=c(1.5,1.3,1.4,1.6,1.4,1.2,1.5,1.7,1.8,1.7,2.3))
setkey(dt,TICKER,PERIOD,ID,DATE)
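
For reference, a quick check that the key has been set as intended; the rolling joins further down rely on this column order:

key(dt)
# "TICKER" "PERIOD" "ID" "DATE"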

Now, for each ticker/period combination, I need to add the following in new columns:

  • PRIORAVG: the mean of the most recent VALUE of each ID, excluding the current ID, and no more than 180 days old.
  • PREV: the previous VALUE from the same ID.

The result should look like this:

      TICKER     PERIOD       DATE ID VALUE PRIORAVG PREV
 [1,]    ABC 2010-12-31 2010-01-05  1   1.5       NA   NA
 [2,]    ABC 2010-12-31 2010-01-08  1   1.4     1.30  1.5
 [3,]    ABC 2010-12-31 2010-01-10  1   1.4     1.45  1.4
 [4,]    ABC 2010-12-31 2010-01-13  1   1.5     1.40  1.4
 [5,]    ABC 2010-12-31 2010-04-01  1   1.7     1.40  1.5
 [6,]    ABC 2010-12-31 2010-01-07  2   1.3     1.50   NA
 [7,]    ABC 2010-12-31 2010-01-11  2   1.2     1.50  1.3
 [8,]    ABC 2010-12-31 2010-04-02  2   1.8     1.65  1.2
 [9,]    ABC 2010-12-31 2010-08-03  2   1.7     1.70  1.8
[10,]    ABC 2010-12-31 2010-01-09  3   1.6     1.35   NA
[11,]    DEF 2011-12-31 2011-02-05  1   2.3       NA   NA

Note that PRIORAVG on row 9 equals 1.7, which equals the VALUE on row 5: that is the only prior observation from another ID within the past 180 days.
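
To see the 180-day window at work for that row, the date arithmetic works out as follows (a quick check using the example dates):

as.numeric(as.Date("2010-08-03") - as.Date("2010-04-01"))  # 124 days: row 5 is inside the window
as.numeric(as.Date("2010-08-03") - as.Date("2010-01-13"))  # 202 days: the January observations are outside it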

I have discovered the data.table package, but I can't seem to fully grasp the := function. When I keep it simple it seems to work. To get the previous value for each ID (I based this on the solution to this question):

dt[,PREV:=dt[J(TICKER,PERIOD,ID,DATE-1),roll=TRUE,mult="last"][,VALUE]]

This works great: it takes only 0.13 seconds on my dataset of about 250k rows, and my vector-scan function gives the same result but is roughly 30,000 times slower.
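
To illustrate what the rolling join is doing, here is a small check against the example data; the literal values below are just the key of the 2010-01-10 row for ID 1:

dt[J("ABC", as.Date("2010-12-31"), 1, as.Date("2010-01-10") - 1),
   VALUE, roll = TRUE, mult = "last"]
# 1.4, the VALUE observed on 2010-01-08, i.e. the PREV for the 2010-01-10 row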

OK, so that covers the first requirement. On to the second, more complicated one. Right now the fastest approach I have is to use a couple of vector scans and push a function through the plyr function adply to get the result for each row:

# mean of the most recent VALUE of every other ID in the same ticker/period,
# restricted to the 180 days before `date`
calc <- function(df,ticker,period,id,date) {
  df <- df[df$TICKER == ticker & df$PERIOD == period 
        & df$ID != id & df$DATE < date & df$DATE > date-180, ]
  df <- df[order(df$DATE),]                               # oldest to newest
  mean(df[!duplicated(df$ID, fromLast = TRUE),"VALUE"])   # keep only the last row per ID
}

df <- data.frame(dt)
adply(df,1,function(x) calc(df,x$TICKER,x$PERIOD,x$ID,x$DATE))

I wrote the function for a data.frame and it doesn't seem to work on a data.table. For a 5000-row subset this takes about 44 seconds, but my data contains over 1 million rows. I am wondering whether this could be made more efficient through the use of :=.

dt[J("ABC"),last(VALUE),by=ID][,mean(V1)]

This works to select the mean of the most recent VALUE for each ID of ABC.
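
Spelling out the intermediate step with the example data: the latest VALUE per ID for ABC is 1.7 (ID 1, 2010-04-01), 1.7 (ID 2, 2010-08-03) and 1.6 (ID 3), so:

dt[J("ABC"), last(VALUE), by=ID][, mean(V1)]
# (1.7 + 1.7 + 1.6) / 3 = 1.666667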

dt[,PRIORAVG:=dt[J(TICKER,PERIOD),last(VALUE),by=ID][,mean(V1)]]

However, this does not work as expected: it takes the mean of the last VALUEs across all tickers/periods instead of only the current ticker/period, so every row ends up with the same mean. Am I doing something wrong, or is this a limitation of :=?
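
As an aside, one way to see the recycling is to make the grouping explicit. The sketch below (GRPAVG is just an illustrative column name) gives a per-ticker/period mean, though it still neither excludes the current ID nor applies the 180-day window, so it is not yet PRIORAVG:

dt[, GRPAVG := mean(.SD[, last(VALUE), by = ID]$V1), by = list(TICKER, PERIOD)]
# GRPAVG becomes 1.666667 for every ABC row and 2.3 for DEF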

2 Answers:

Answer 0 (score: 12):

Good question. Try this:

dt
     TICKER     PERIOD       DATE ID VALUE
[1,]    ABC 2010-12-31 2010-01-05  1   1.5
[2,]    ABC 2010-12-31 2010-01-08  1   1.4
[3,]    ABC 2010-12-31 2010-01-10  1   1.4
[4,]    ABC 2010-12-31 2010-01-13  1   1.5
[5,]    ABC 2010-12-31 2010-01-07  2   1.3
[6,]    ABC 2010-12-31 2010-01-11  2   1.2
[7,]    ABC 2010-12-31 2010-01-09  3   1.6
[8,]    DEF 2011-12-31 2011-02-05  1   2.3

ids = unique(dt$ID)
dt[,PRIORAVG:=NA_real_]
# for each row, roll-join to the most recent VALUE of every other ID as of that
# row's DATE, and average those values
for (i in 1:nrow(dt))
    dt[i,PRIORAVG:=dt[J(TICKER[i],PERIOD[i],setdiff(ids,ID[i]),DATE[i]),
                      mean(VALUE,na.rm=TRUE),roll=TRUE,mult="last"]]
dt
     TICKER     PERIOD       DATE ID VALUE PRIORAVG
[1,]    ABC 2010-12-31 2010-01-05  1   1.5       NA
[2,]    ABC 2010-12-31 2010-01-08  1   1.4     1.30
[3,]    ABC 2010-12-31 2010-01-10  1   1.4     1.45
[4,]    ABC 2010-12-31 2010-01-13  1   1.5     1.40
[5,]    ABC 2010-12-31 2010-01-07  2   1.3     1.50
[6,]    ABC 2010-12-31 2010-01-11  2   1.2     1.50
[7,]    ABC 2010-12-31 2010-01-09  3   1.6     1.35
[8,]    DEF 2011-12-31 2011-02-05  1   2.3       NA

Then a slight simplification of what you already have (passing VALUE directly as j instead of chaining [,VALUE]) ...

dt[,PREV:=dt[J(TICKER,PERIOD,ID,DATE-1),VALUE,roll=TRUE,mult="last"]]

     TICKER     PERIOD       DATE ID VALUE PRIORAVG PREV
[1,]    ABC 2010-12-31 2010-01-05  1   1.5       NA   NA
[2,]    ABC 2010-12-31 2010-01-08  1   1.4     1.30  1.5
[3,]    ABC 2010-12-31 2010-01-10  1   1.4     1.45  1.4
[4,]    ABC 2010-12-31 2010-01-13  1   1.5     1.40  1.4
[5,]    ABC 2010-12-31 2010-01-07  2   1.3     1.50   NA
[6,]    ABC 2010-12-31 2010-01-11  2   1.2     1.50  1.3
[7,]    ABC 2010-12-31 2010-01-09  3   1.6     1.35   NA
[8,]    DEF 2011-12-31 2011-02-05  1   2.3       NA   NA

If this works as a prototype, then a big speed gain is to keep the loop but use set() instead of := to reduce overhead:

# same lookup as above, but set() writes into column 6L (PRIORAVG) with less overhead
for (i in 1:nrow(dt))
    set(dt,i,6L,dt[J(TICKER[i],PERIOD[i],setdiff(ids,ID[i]),DATE[i]),
                   mean(VALUE,na.rm=TRUE),roll=TRUE,mult="last"])
dt
     TICKER     PERIOD       DATE ID VALUE PRIORAVG PREV
[1,]    ABC 2010-12-31 2010-01-05  1   1.5       NA   NA
[2,]    ABC 2010-12-31 2010-01-08  1   1.4     1.30  1.5
[3,]    ABC 2010-12-31 2010-01-10  1   1.4     1.45  1.4
[4,]    ABC 2010-12-31 2010-01-13  1   1.5     1.40  1.4
[5,]    ABC 2010-12-31 2010-01-07  2   1.3     1.50   NA
[6,]    ABC 2010-12-31 2010-01-11  2   1.2     1.50  1.3
[7,]    ABC 2010-12-31 2010-01-09  3   1.6     1.35   NA
[8,]    DEF 2011-12-31 2011-02-05  1   2.3       NA   NA

That should be significantly faster than the repeated vector scans shown in the question.

Alternatively, the operation could be vectorized, but given the nature of the task it would be less easy to write and to read.

By the way, there is no data in the question that tests the 180-day requirement. If you add some and show the expected output again, I will add the age test using join inherited scope, which I mentioned in the comments.
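
For readers unfamiliar with the term, join inherited scope means that inside j of a join X[Y], the columns of Y are also available, prefixed with i.; a minimal, generic sketch (X, Y and their columns are made up purely for the illustration, with a reasonably recent data.table):

X <- data.table(id = c(1L, 2L), xval = c(10, 20), key = "id")
Y <- data.table(id = c(1L, 2L), yval = c(0.5, 0.7), key = "id")
X[Y, list(id, xval, i.yval)]   # i.yval is taken from Y, the i table of the join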

Answer 1 (score: 0):

Another possible approach, using a more recent version of data.table:

library(data.table) #data.table_1.12.6 as of Nov 20, 2019
DT <- copy(dt)      # assuming DT is a copy of the data.table built in the question
cols <- copy(names(DT))
DT[, c("MIN_DATE", "MAX_DATE") := .(DATE - 180L, DATE)]

DT[, PRIORAVG := 
        # non-equi self-join: for each row (the i side), collect rows of the same
        # TICKER/PERIOD whose DATE falls within that row's 180-day window
        .SD[.SD, on=.(TICKER, PERIOD, DATE>=MIN_DATE, DATE<=MAX_DATE),
            by=.EACHI, {
                # drop observations belonging to the same ID as the current row
                subdat <- .SD[x.ID!=i.ID]
                # average the latest VALUE per remaining ID, or NA if nothing matched
                pavg <- if (subdat[, .N > 0L])
                    mean(subdat[, last(VALUE), ID]$V1, na.rm=TRUE)
                else 
                    NA_real_
                c(setNames(mget(paste0("i.", cols)), cols), .(PRIORAVG=pavg))
            }]$PRIORAVG
]

DT[, PREV := shift(VALUE), .(TICKER, PERIOD, ID)]

Output:

    TICKER     PERIOD       DATE ID VALUE   MIN_DATE   MAX_DATE PRIORAVG PREV
 1:    ABC 2010-12-31 2010-01-05  1   1.5 2009-07-09 2010-01-05       NA   NA
 2:    ABC 2010-12-31 2010-01-08  1   1.4 2009-07-12 2010-01-08     1.30  1.5
 3:    ABC 2010-12-31 2010-01-10  1   1.4 2009-07-14 2010-01-10     1.45  1.4
 4:    ABC 2010-12-31 2010-01-13  1   1.5 2009-07-17 2010-01-13     1.40  1.4
 5:    ABC 2010-12-31 2010-04-01  1   1.7 2009-10-03 2010-04-01     1.40  1.5
 6:    ABC 2010-12-31 2010-01-07  2   1.3 2009-07-11 2010-01-07     1.50   NA
 7:    ABC 2010-12-31 2010-01-11  2   1.2 2009-07-15 2010-01-11     1.50  1.3
 8:    ABC 2010-12-31 2010-04-02  2   1.8 2009-10-04 2010-04-02     1.65  1.2
 9:    ABC 2010-12-31 2010-08-03  2   1.7 2010-02-04 2010-08-03     1.70  1.8
10:    ABC 2010-12-31 2010-01-09  3   1.6 2009-07-13 2010-01-09     1.35   NA
11:    DEF 2011-12-31 2011-02-05  1   2.3 2010-08-09 2011-02-05       NA   NA
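
If the MIN_DATE and MAX_DATE helper columns are not needed afterwards, they can be removed by reference (an optional cleanup step, assuming nothing else uses them):

DT[, c("MIN_DATE", "MAX_DATE") := NULL]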