Merge rows by maximum and minimum date

Time: 2017-02-07 02:39:48

Tags: r dplyr

I have a dataset that looks like this.

id1 = c(1,1,1,1,1,1,1,1,2,2)
id2 = c(3,3,3,3,3,3,3,3,3,3)
lat = c(-62.81559,-62.82330, -62.78693,-62.70136, -62.76476,-62.48157,-62.49064,-62.45838,42.06258,42.06310)
lon = c(-61.15518, -61.14885,-61.17801,-61.00363, -59.14270, -59.22009, -59.32967, -59.04125 ,154.70579, 154.70625)
start_date= as.POSIXct(c('2016-03-24 15:30:00', '2016-03-24 15:30:00','2016-03-24 23:40:00','2016-03-25 12:50:00','2016-03-29 18:20:00','2016-06-01 02:40:00','2016-06-01 08:00:00','2016-06-01 16:30:00','2016-07-29 20:20:00','2016-07-29 20:20:00'), tz = 'UTC')
end_date = as.POSIXct(c('2016-03-24 23:40:00', '2016-03-24 18:50:00','2016-03-25 03:00:00','2016-03-25 19:20:00','2016-04-01 03:30:00','2016-06-02 01:40:00','2016-06-01 14:50:00','2016-06-02 01:40:00','2016-07-30 07:00:00','2016-07-30 07:00:00'),tz = 'UTC')
speed = c(2.9299398, 2.9437502, 0.0220565, 0.0798409, 1.2824859, 1.8685429, 3.7927680, 1.8549291, 0.8140249,0.8287073)
df = data.frame(id1, id2, lat, lon, start_date, end_date, speed)

id1 id2       lat       lon          start_date            end_date     speed
1    1   3 -62.81559 -61.15518 2016-03-24 15:30:00 2016-03-24 23:40:00 2.9299398
2    1   3 -62.82330 -61.14885 2016-03-24 15:30:00 2016-03-24 18:50:00 2.9437502
3    1   3 -62.78693 -61.17801 2016-03-24 23:40:00 2016-03-25 03:00:00 0.0220565
4    1   3 -62.70136 -61.00363 2016-03-25 12:50:00 2016-03-25 19:20:00 0.0798409
5    1   3 -62.76476 -59.14270 2016-03-29 18:20:00 2016-04-01 03:30:00 1.2824859
6    1   3 -62.48157 -59.22009 2016-06-01 02:40:00 2016-06-02 01:40:00 1.8685429
7    1   3 -62.49064 -59.32967 2016-06-01 08:00:00 2016-06-01 14:50:00 3.7927680
8    1   3 -62.45838 -59.04125 2016-06-01 16:30:00 2016-06-02 01:40:00 1.8549291
9    2   3  42.06258 154.70579 2016-07-29 20:20:00 2016-07-30 07:00:00 0.8140249
10   2   3  42.06310 154.70625 2016-07-29 20:20:00 2016-07-30 07:00:00 0.8287073

The actual dataset is much larger. What I want to do is collapse this dataset based on the date ranges, grouped by id1 and id2, so that rows whose date/time range falls within 12 hours of the next one's (ABS(end_date[1] - start_date[2]) < 12 hours) are merged, with the new start_date being the earliest date and the end_date the latest. All other values (lat, lon, speed) are averaged. In a sense this is a "deduplication" job, because rows within 12 hours of each other really represent the same "event". For the example above, the final result would be

id1 id2       lat       lon          start_date            end_date     speed
1    1   3 -62.7818  -61.12142 2016-03-24 15:30:00 2016-03-25 19:20:00 1.493897
2    1   3 -62.76476 -59.14270 2016-03-29 18:20:00 2016-04-01 03:30:00 1.2824859
3    1   3 -62.47686 -59.197   2016-06-01 02:40:00 2016-06-02 01:40:00 2.505413
4    2   3  42.06284 154.706   2016-07-29 20:20:00 2016-07-30 07:00:00 0.8213661

That is, the first four rows are merged (into row 1), row 5 is left on its own (row 2), rows 6-8 are merged (row 3), and rows 9-10 are merged (row 4).
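(To make the 12-hour rule concrete, here is a small illustration on the sample rows above using difftime; the exact form of the threshold check is an assumption based on the description.)

# gap between row 1's end and row 2's start: about 8.2 hours, so rows 1 and 2 belong to the same event
abs(difftime(df$start_date[2], df$end_date[1], units = "hours"))
# gap between row 4's end and row 5's start: about 95 hours, so row 5 starts a new event
abs(difftime(df$start_date[5], df$end_date[4], units = "hours"))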

I have been trying to do this with dplyr's group_by and summarize, but I can't seem to get the date ranges right.

Hopefully someone can point out a simple way to solve this. Bonus points if you know how to do it in SQL ;-) so I can deduplicate the data before pulling it into R.

1 Answer:

Answer 0 (score: 0)

Here is a first, very naive implementation. Warning: it is slow, not pretty, and the start and end dates are still missing from the output! Note that it expects the rows to be sorted by date and time. If that is not the case in your dataset, you can do that first in R or SQL. Sorry, I can't think of a dplyr or SQL solution; if anyone has ideas for either, I would love to see them too.
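(If the rows are not already sorted, a minimal sketch of that ordering step in base R, assuming sorting by id1, id2 and start_date is what is wanted, could be run before calling the dedupe() function below; dplyr::arrange would work equally well.)

# order rows by id1, id2 and start_date before running dedupe()
df <- df[order(df$id1, df$id2, df$start_date), ]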

dedupe <- function(df) {
  counter = 1
  temp_vector = unlist(df[1, ])
  summarized_df = df[0, c(1, 2, 3, 4, 7)]
  colnames(summarized_df) = colnames(df)[c(1, 2, 3, 4, 7)]
  summarized_df$counter = NULL
  for (i in 2:nrow(df)) {
    if (((abs(difftime(df[i, "start_date"], df[i - 1, "end_date"], units = "h")) <
          12) ||
         abs(difftime(df[i, "start_date"], df[i - 1, "start_date"], units = "h")) <
         12) &&
        df[i, "id1"] == df[i - 1, "id1"] &&
        df[i, "id2"] == df[i - 1, "id2"]) {
      #group events because id is the same and time range overlap
      #sum up columns and select maximum end_date
      temp_vector[c(3, 4, 7)] = temp_vector[c(3, 4, 7)] + unlist(df[i, c(3, 4, 7)])
      temp_vector["end_date"] = max(temp_vector["end_date"], df[i, "end_date"])
      counter = counter + 1
      if (i == nrow(df)) {
        #in the last iteration we need to create a new group
        summarized_df[nrow(summarized_df) + 1, c(1, 2)] = df[i, c(1, 2)]
        summarized_df[nrow(summarized_df), 3:5] = temp_vector[c(3, 4, 7)] / counter
        summarized_df[nrow(summarized_df), "counter"] = counter
      }
    } else {
      #new event so we calculate group statistics for temp_vector and reset its value as well as counter
      summarized_df[nrow(summarized_df) + 1, c(1, 2)] = df[i, c(1, 2)]
      summarized_df[nrow(summarized_df), 3:5] = temp_vector[c(3, 4, 7)] / counter
      summarized_df[nrow(summarized_df), "counter"] = counter
      counter = 1
      temp_vector[c(3, 4, 7)] = unlist(df[i, c(3, 4, 7)])
    }
  }
  return(summarized_df)
}

Function call:

> dedupe(df)
   id1 id2       lat       lon     speed counter
5    1   3 -62.78179 -61.12142 1.4938968       4
6    1   3 -62.76476 -59.14270 1.2824859       1
9    2   3 -62.47686 -59.19700 2.5054133       3
10   2   3  42.06284 154.70602 0.8213661       2
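Since the answer above invites dplyr ideas, here is a minimal dplyr sketch of the same grouping rule, offered as an untested suggestion rather than a verified solution. It flags a new event whenever both the gap to the previous row's end_date and the gap to the previous row's start_date are 12 hours or more (the same condition as the loop above), takes a cumulative sum of that flag as an event id within each id1/id2 pair, and then summarises each event. It assumes the rows are already sorted by start_date within id1/id2.

library(dplyr)

df %>%
  arrange(id1, id2, start_date) %>%
  group_by(id1, id2) %>%
  mutate(
    # gaps (in hours) to the previous row's end and start; NA on the first row of each group
    gap_end   = abs(as.numeric(difftime(start_date, lag(end_date),   units = "hours"))),
    gap_start = abs(as.numeric(difftime(start_date, lag(start_date), units = "hours"))),
    # the first row of each group (NA gaps) always starts a new event
    new_event = is.na(gap_end) | !(gap_end < 12 | gap_start < 12),
    event     = cumsum(new_event)
  ) %>%
  group_by(id1, id2, event) %>%
  summarise(
    lat        = mean(lat),
    lon        = mean(lon),
    start_date = min(start_date),
    end_date   = max(end_date),
    speed      = mean(speed)
  )

On the sample data above this appears to reproduce the four rows shown in the question (plus an extra event column), but it has not been checked against a larger dataset.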