How to aggregate data into 3-minute timestamp windows in sparklyr?

Asked: 2018-04-23 13:23:19

Tags: r apache-spark dplyr sparklyr

I am using sparklyr for a quick analysis and I am having some trouble with timestamps. I have two different data frames: one with rows at 1-minute intervals, and one at 3-minute intervals.

First dataset (1-minute intervals):

id  timefrom    timeto  value
10  "2017-06-06 10:30:00"   "2017-06-06 10:31:00"   50
10  "2017-06-06 10:31:00"   "2017-06-06 10:32:00"   80
10  "2017-06-06 10:32:00"   "2017-06-06 10:33:00"   20
22  "2017-06-06 10:33:00"   "2017-06-06 10:34:00"   30
22  "2017-06-06 10:34:00"   "2017-06-06 10:35:00"   50
22  "2017-06-06 10:35:00"   "2017-06-06 10:36:00"   50

Second dataset (3-minute intervals):

id  timefrom    timeto  value
10  "2017-06-06 10:30:00"   "2017-06-06 10:33:00"   30
22  "2017-06-06 10:33:00"   "2017-06-06 10:36:00"   67
32  "2017-06-06 10:36:00"   "2017-06-06 10:39:00"   28
14  "2017-06-06 10:39:00"   "2017-06-06 10:42:00"   30
27  "2017-06-06 10:42:00"   "2017-06-06 10:55:00"   90

To compare the values of the two datasets, I first have to aggregate the first dataset into 3-minute windows and compute the mean value per window. I also have to find the best-fitting window in the second dataset.

The result should look like this:

id  timefrom    timeto  value1  value2
10  "2017-06-06 10:30:00"   "2017-06-06 10:33:00"   30  50
22  "2017-06-06 10:33:00"   "2017-06-06 10:36:00"   67  43.3

Is it possible to achieve this with sparklyr? Thanks for your help!

1 answer:

Answer 0 (score: 1)

Assuming your data has already been parsed:

df1
# # Source:   table<df1> [?? x 4]
# # Database: spark_connection
#      id timefrom            timeto              value
#   <int> <dttm>              <dttm>              <int>
# 1    10 2017-06-06 08:30:00 2017-06-06 08:31:00    50
# 2    10 2017-06-06 08:31:00 2017-06-06 08:32:00    80
# 3    10 2017-06-06 08:32:00 2017-06-06 08:33:00    20
# 4    22 2017-06-06 08:33:00 2017-06-06 08:34:00    30
# 5    22 2017-06-06 08:34:00 2017-06-06 08:35:00    50
# 6    22 2017-06-06 08:35:00 2017-06-06 08:36:00    50

df2
# # Source:   table<df2> [?? x 4]
# # Database: spark_connection
#      id timefrom            timeto              value
#   <int> <dttm>              <dttm>              <int>
# 1    10 2017-06-06 08:30:00 2017-06-06 08:33:00    30
# 2    22 2017-06-06 08:33:00 2017-06-06 08:36:00    67
# 3    32 2017-06-06 08:36:00 2017-06-06 08:39:00    28
# 4    14 2017-06-06 08:39:00 2017-06-06 08:42:00    30
# 5    27 2017-06-06 08:42:00 2017-06-06 08:55:00    90
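
If the timestamps are still strings, one way to parse them on the Spark side is shown below. This is a minimal sketch, not part of the original answer: the connection object `sc`, the local data frame `df1_local`, and the use of a `local` Spark master are all assumptions for illustration. `to_timestamp` is a Spark SQL function that sparklyr passes through untranslated.

```r
library(sparklyr)
library(dplyr)

# Assumed local connection for illustration
sc <- spark_connect(master = "local")

# Copy a local data frame to Spark and cast the string columns
# to timestamps using Spark SQL's to_timestamp function
df1 <- copy_to(sc, df1_local, "df1", overwrite = TRUE) %>%
  mutate(
    timefrom = to_timestamp(timefrom),
    timeto   = to_timestamp(timeto)
  )
```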

You can use the window function:

exprs <- list(
  "id", "value as value2",
  # window generates structure struct<start: timestamp, end: timestamp>
  # we use dot syntax to access nested fields
  "window.start as timefrom", "window.end as timeto")

df1_agg <- df1 %>% 
  mutate(window = window(timefrom, "3 minutes")) %>% 
  group_by(id, window) %>% 
  summarise(value = avg(value)) %>%
  # As far as I am aware there is no sparklyr syntax 
  # for accessing struct fields, so we'll use simple SQL expression
  spark_dataframe() %>% 
  invoke("selectExpr", exprs) %>% 
  sdf_register() %>%
  print()

# # Source:   table<sparklyr_tmp_472ee8ba244> [?? x 4]
# # Database: spark_connection
#      id value2 timefrom            timeto             
#   <int>  <dbl> <dttm>              <dttm>             
# 1    22   43.3 2017-06-06 08:33:00 2017-06-06 08:36:00
# 2    10   50.0 2017-06-06 08:30:00 2017-06-06 08:33:00
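
As a possible alternative to the `invoke("selectExpr", ...)` round trip, recent dbplyr versions allow inlining raw SQL into `mutate` via `sql()`, which can reach the nested struct fields directly. This is a sketch under that assumption; whether it translates cleanly depends on your sparklyr/dbplyr versions:

```r
# Sketch: access window.start / window.end with inline SQL
# instead of dropping down to the Java API
df1_agg <- df1 %>%
  mutate(window = window(timefrom, "3 minutes")) %>%
  group_by(id, window) %>%
  summarise(value2 = avg(value)) %>%
  mutate(
    timefrom = sql("window.start"),
    timeto   = sql("window.end")
  ) %>%
  select(id, value2, timefrom, timeto)
```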

Then you can join by id and the timestamp columns:

df2 %>% inner_join(df1_agg, by = c("id", "timefrom", "timeto"))
# # Source:   lazy query [?? x 5]
# # Database: spark_connection
#      id timefrom            timeto              value value2
#   <int> <dttm>              <dttm>              <int>  <dbl>
# 1    10 2017-06-06 08:30:00 2017-06-06 08:33:00    30   50.0
# 2    22 2017-06-06 08:33:00 2017-06-06 08:36:00    67   43.3
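
The join above stays lazy on the Spark side; once the result is small enough, it can be pulled into R for local work. A short usage sketch (assuming the `df2` and `df1_agg` objects from above):

```r
# Execute the lazy query and bring the joined rows back
# into a local tibble for inspection or plotting
result <- df2 %>%
  inner_join(df1_agg, by = c("id", "timefrom", "timeto")) %>%
  collect()
```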