Subset observations that are at least 30 minutes apart

Time: 2016-04-28 14:48:30

Tags: r data.table posixct

I have a data.table (~30 million rows) containing a POSIXct column datetime, an id column, and several other columns (in the example I have kept only one irrelevant column x to show that there are further columns that need to be preserved). A dput is at the bottom of the post.

head(DT)
#              datetime          x id
#1: 2016-04-28 16:20:18 0.02461368  1
#2: 2016-04-28 16:41:34 0.88953932  1
#3: 2016-04-28 16:46:07 0.31818101  1
#4: 2016-04-28 17:00:56 0.14711365  1
#5: 2016-04-28 17:09:11 0.54406602  1
#6: 2016-04-28 17:39:09 0.69280341  1

Q: For each id I need to subset only those observations that differ by more than 30 minutes in time. What would be an efficient data.table approach to do this (if possible, without extensive looping)?

The logic can also be described as follows (as I stated in a comment below):

  Per id the first row is always kept. The next row that is at least 30 minutes after the first shall be kept as well. Let's assume that the row to be kept is row 4. Then compute the time differences between row 4 and rows 5:n and keep the first one that differs by more than 30 minutes, and so on.

In the dput below I have added a column keep to indicate which rows should be kept in this example, because they differ by more than 30 minutes from the previously kept observation of that id. The difficulty is that it seems necessary to compute the time differences iteratively (or at least, I cannot think of a more efficient way at the moment).

library(data.table)
DT <- structure(list(
  datetime = structure(c(1461853218.81561, 1461854494.81561, 
    1461854767.81561, 1461855656.81561, 1461856151.81561, 1461857949.81561, 
    1461858601.81561, 1461858706.81561, 1461859078.81561, 1461859103.81561, 
    1461852799.81561, 1461852824.81561, 1461854204.81561, 1461855331.81561, 
    1461855633.81561, 1461856311.81561, 1461856454.81561, 1461857177.81561, 
    1461858662.81561, 1461858996.81561), class = c("POSIXct", "POSIXt")), 
  x = c(0.0246136845089495, 0.889539316063747, 0.318181007634848, 
  0.147113647311926, 0.544066024711356, 0.6928034061566, 0.994269776623696, 
  0.477795971091837, 0.231625785352662, 0.963024232536554, 0.216407935833558, 
  0.708530468167737, 0.758459537522867, 0.640506813768297, 0.902299045119435, 
  0.28915973729454, 0.795467417687178, 0.690705278422683, 0.59414202044718, 
  0.655705799115822), 
  id = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L), 
  keep = c(TRUE, FALSE, FALSE, TRUE, FALSE, TRUE, FALSE, FALSE, FALSE, FALSE, TRUE, FALSE, 
           FALSE, TRUE, FALSE, FALSE, FALSE, TRUE, FALSE, TRUE)), 
  .Names = c("datetime", "x", "id", "keep"), 
  row.names = c(NA, -20L), 
  class = c("data.table", "data.frame"))

setkey(DT, id, datetime)
# time difference to the previous row per id, accumulated into minutes
# elapsed since the first row of each id
DT[, difftime := difftime(datetime, shift(datetime, 1L, NA, type = "lag"), units = "mins"),
   by = id]
DT[is.na(difftime), difftime := 0]
DT[, difftime := cumsum(as.numeric(difftime)), by = id]

Explanation of the keep column:

  • Rows 2:3 differ by less than 30 minutes from row 1 -> delete
  • Row 4 differs by more than 30 minutes from row 1 -> keep
  • Row 5 differs by less than 30 minutes from row 4 -> delete
  • Row 6 differs by more than 30 minutes from row 4 -> keep
  • ...

Desired output:

desiredDT <- DT[(keep)]
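For reference, the rule can also be written as a plain per-id loop; the following is only a slow sketch to make the logic concrete (keepFirstAfter30 and keep.ref are illustrative names, not part of any answer):

keepFirstAfter30 <- function(datetime, mins = 30) {
  # naive reference implementation of the rule described above
  n <- length(datetime)
  keep <- logical(n)
  keep[1] <- TRUE                         # the first row per id is always kept
  last <- datetime[1]
  for (i in seq_len(n)[-1]) {
    if (difftime(datetime[i], last, units = "mins") > mins) {
      keep[i] <- TRUE                     # more than 30 minutes after the last kept row
      last <- datetime[i]
    }
  }
  keep
}

DT[, keep.ref := keepFirstAfter30(datetime), by = id]
DT[, all(keep == keep.ref)]               # should be TRUE for the example data

This is, of course, exactly the kind of looping the question tries to avoid for 30 million rows.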

I am grateful for the three expert answers I received. I tested them on 1 and 10 million rows of data. Here is an excerpt of the benchmarks.

a) 1 million rows

microbenchmark(frank(DT_Frank), roland(DT_Roland), eddi1(DT_Eddi1), eddi2(DT_Eddi2), 
               times = 3L, unit = "relative")
#Unit: relative
#              expr       min        lq      mean    median        uq      max neval
#   frank(DT_Frank)  1.286647  1.277104  1.185216  1.267769  1.140614 1.036749     3
# roland(DT_Roland)  1.000000  1.000000  1.000000  1.000000  1.000000 1.000000     3
#   eddi1(DT_Eddi1) 11.748622 11.697409 10.941792 11.647320 10.587002 9.720901     3
#   eddi2(DT_Eddi2)  9.966078  9.915651  9.210168  9.866330  8.877769 8.070281     3

b) 10 million rows

microbenchmark(frank(DT_Frank), roland(DT_Roland), eddi1(DT_Eddi1), eddi2(DT_Eddi2), 
                times = 3L, unit = "relative")
#Unit: relative
#              expr       min        lq      mean    median        uq       max neval
#   frank(DT_Frank)  1.019561  1.025427  1.026681  1.031061  1.030028  1.029037     3
# roland(DT_Roland)  1.000000  1.000000  1.000000  1.000000  1.000000  1.000000     3
#   eddi1(DT_Eddi1) 11.567302 11.443146 11.301487 11.323914 11.176515 11.035143     3
#   eddi2(DT_Eddi2)  9.796800  9.693823  9.526193  9.594931  9.398969  9.211019     3

Apparently, @Frank's data.table approach and @Roland's Rcpp-based solution are similar in performance, with Rcpp having a slight edge, while @eddi's approaches are still fast but not as efficient as the others.

However, when I checked the solutions for equality, I found that @Roland's approach gives slightly different results than the others:

a) 1 million rows

all.equal(frank(DT_Frank), roland(DT_Roland))
#[1] "Component “datetime”: Numeric: lengths (982228, 982224) differ"
#[2] "Component “id”: Numeric: lengths (982228, 982224) differ"      
#[3] "Component “x”: Numeric: lengths (982228, 982224) differ"
all.equal(frank(DT_Frank), eddi1(DT_Eddi1))
#[1] TRUE
all.equal(frank(DT_Frank), eddi2(DT_Eddi2))
#[1] TRUE

b) 10 million rows

all.equal(frank(DT_Frank), roland(DT_Roland))
#[1] "Component “datetime”: Numeric: lengths (9981898, 9981891) differ"
#[2] "Component “id”: Numeric: lengths (9981898, 9981891) differ"      
#[3] "Component “x”: Numeric: lengths (9981898, 9981891) differ"       
all.equal(frank(DT_Frank), eddi1(DT_Eddi1))
#[1] TRUE
all.equal(frank(DT_Frank), eddi2(DT_Eddi2))
#[1] TRUE

My current hypothesis is that the difference is related to whether the time difference is > 30 minutes or >= 30 minutes, although I am not sure about that yet.
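A minimal check of that boundary case (just a sketch of the hypothesis, not a confirmation): Roland's C++ condition keeps a row only if it is strictly more than 30 minutes after the last kept one, whereas a rolling-join match at exactly +30 minutes corresponds to >=:

tt <- as.POSIXct("2016-04-28 16:00:00") + c(0, 30 * 60)   # two timestamps exactly 30 minutes apart
diff(as.numeric(tt)) >  30 * 60    # FALSE: dropped under a strict '>' test
diff(as.numeric(tt)) >= 30 * 60    # TRUE:  kept under a '>=' style match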

Final thoughts: I decided to go with @Frank's solution for two reasons: 1. it performs very well, almost on par with the Rcpp solution, and 2. it does not require yet another package that I am not familiar with (I am already using data.table).

3 Answers:

Answer 0 (score: 11)

This is what I would do:


setDT(DT, key=c("id","datetime")) # invalid selfref with the OP's example data

s = 0L
w = DT[, .I[1L], by=id]$V1          # start with the first row of each id

while (length(w)){
   s = s + 1L
   DT[w, tag := s]                  # mark the current set of kept rows

   # per id, look up the next row at or after "kept datetime + 30 minutes"
   m = DT[w, .(id, datetime = datetime+30*60)]
   w = DT[m, which = TRUE, roll=-Inf]
   w = w[!is.na(w)]
}

The idea behind this is described by the OP in a comment:

  Per id the first row is always kept. The next row that is at least 30 minutes after the first shall be kept as well. Let's assume that the row to be kept is row 4. Then compute the time differences between row 4 and rows 5:n and keep the first one that differs by more than 30 minutes, and so on.
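Since tag is only assigned to the rows the loop keeps, the subset corresponding to keep can then (as far as I understand the approach) be pulled out along these lines (res is an illustrative name):

res <- DT[!is.na(tag)]                # rows that received a tag are the kept ones
DT[, all(keep == !is.na(tag))]        # TRUE for the example data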

Answer 1 (score: 9)

Using Rcpp:

library(Rcpp)
library(inline)
cppFunction(
  'LogicalVector selecttimes(const NumericVector x) {
   // x: datetimes as seconds since the epoch, sorted within the group
   const int n = x.length();
   LogicalVector res(n);
   res(0) = true;                      // always keep the first row
   double testval = x(0);
   for (int i=1; i<n; i++) {
    if (x(i) - testval > 30 * 60) {    // strictly more than 30 minutes after the last kept row
      testval = x(i);
      res(i) = true;
    }
   }
   return res;
  }')

DT[, keep1 := selecttimes(datetime), by = id]

DT[, all(keep == keep1)]
#[1] TRUE

Some additional testing should be done; it needs input validation, and the time difference could be passed in as a parameter.
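A sketch of how the threshold could be passed in as a parameter, assuming the same inputs (selecttimes2, mindiff and keep2 are illustrative names, not from the original answer):

library(Rcpp)
cppFunction(
  'LogicalVector selecttimes2(const NumericVector x, const double mindiff) {
   // x: datetimes in seconds, sorted within the group; mindiff: minimum gap in seconds
   const int n = x.length();
   LogicalVector res(n);
   if (n == 0) return res;
   res(0) = true;
   double testval = x(0);
   for (int i = 1; i < n; i++) {
     if (x(i) - testval > mindiff) {
       testval = x(i);
       res(i) = true;
     }
   }
   return res;
  }')

DT[, keep2 := selecttimes2(as.numeric(datetime), 30 * 60), by = id]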

Answer 2 (score: 7)

# create an index column
DT[, idx := 1:.N, by = id]

# find the indices of the matching future dates
DT[, fut.idx := DT[.(id = id, datetime = datetime+30*60), on = c('id', 'datetime')
                    , idx, roll = -Inf]]
#               datetime          x id  keep         difftime idx  fut.idx
# 1: 2016-04-28 09:20:18 0.02461368  1  TRUE   0.0000000 mins   1        4
# 2: 2016-04-28 09:41:34 0.88953932  1 FALSE  21.2666667 mins   2        6
# 3: 2016-04-28 09:46:07 0.31818101  1 FALSE  25.8166667 mins   3        6
# 4: 2016-04-28 10:00:56 0.14711365  1  TRUE  40.6333333 mins   4        6
# 5: 2016-04-28 10:09:11 0.54406602  1 FALSE  48.8833333 mins   5        7
# 6: 2016-04-28 10:39:09 0.69280341  1  TRUE  78.8500000 mins   6       NA
# 7: 2016-04-28 10:50:01 0.99426978  1 FALSE  89.7166667 mins   7       NA
# 8: 2016-04-28 10:51:46 0.47779597  1 FALSE  91.4666667 mins   8       NA
# 9: 2016-04-28 10:57:58 0.23162579  1 FALSE  97.6666667 mins   9       NA
#10: 2016-04-28 10:58:23 0.96302423  1 FALSE  98.0833333 mins  10       NA
#11: 2016-04-28 09:13:19 0.21640794  2  TRUE   0.0000000 mins   1        4
#12: 2016-04-28 09:13:44 0.70853047  2 FALSE   0.4166667 mins   2        4
#13: 2016-04-28 09:36:44 0.75845954  2 FALSE  23.4166667 mins   3        6
#14: 2016-04-28 09:55:31 0.64050681  2  TRUE  42.2000000 mins   4        8
#15: 2016-04-28 10:00:33 0.90229905  2 FALSE  47.2333333 mins   5        9
#16: 2016-04-28 10:11:51 0.28915974  2 FALSE  58.5333333 mins   6        9
#17: 2016-04-28 10:14:14 0.79546742  2 FALSE  60.9166667 mins   7        9
#18: 2016-04-28 10:26:17 0.69070528  2  TRUE  72.9666667 mins   8       10
#19: 2016-04-28 10:51:02 0.59414202  2 FALSE  97.7166667 mins   9       NA
#20: 2016-04-28 10:56:36 0.65570580  2  TRUE 103.2833333 mins  10       NA


# at this point the problem is "solved", but you still have to extract the solution
# and that's the more complicated part
DT[, keep.new := FALSE]

# iterate over the matching indices (jumping straight to the correct one)
DT[, {
       next.idx = 1

       while(!is.na(next.idx)) {
         set(DT, .I[next.idx], 'keep.new', TRUE)
         next.idx = fut.idx[next.idx]
       }
     }, by = id]

DT[, identical(keep, keep.new)]
#[1] TRUE

Or, for the last step, you can do the following (this will iterate over the whole thing, but I'm not sure what the speed implications are):

DT[, keep.3 := FALSE]
DT[DT[, .I[na.omit(Reduce(function(x, y) fut.idx[x], c(1, fut.idx), accumulate = T))]
      , by = id]$V1
   , keep.3 := TRUE]

DT[, identical(keep, keep.3)]
#[1] TRUE
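Either way, the actual subset is then just the flagged rows, and the helper columns can be dropped afterwards; a small usage sketch (res is an illustrative name, assuming only the keep.new variant was run):

res <- DT[(keep.new)]                           # the subset itself
res[, c("idx", "fut.idx", "keep.new") := NULL]  # drop the helper columns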