Rebuild/expand a data frame previously collapsed per group ID

Date: 2017-12-12 10:37:13

Tags: r dataframe

This question asks how to "rebuild" a data frame that was previously collapsed by segment_id: the start and end variables should be expanded into a table containing one row for every element inside each interval.

Consider the following sample dataset:

my_df <- structure(list(group_id = c(1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 6, 6, 6, 7, 7, 7, 8, 9),
                        start = c(1L, 1L, 13L, 24L, 1L, 16L, 30L, 1L, 14L, 1L, 1L, 6L, 11L, 1L, 9L, 20L, 1L, 1L),
                        end = c(22L, 13L, 24L, 27L, 16L, 30L, 51L, 14L, 26L, 8L, 6L, 11L, 17L, 9L, 20L, 26L, 17L, 14L),
                        segment_id = c(1L, 1L, 2L, 3L, 1L, 2L, 3L, 1L, 2L, 1L, 1L, 2L, 3L, 1L, 2L, 3L, 1L, 1L)),
                   row.names = 3377225:3377242, class = "data.frame",
                   .Names = c("group_id", "start", "end", "segment_id"))

It is essential to apply the following preprocessing:

my_df[my_df$start > 1, "start"] <- my_df[my_df$start > 1, "start"] + 1
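To make the effect concrete, this is what the adjustment does to group 2 of the sample data: the start of every non-initial segment is shifted past the previous segment's end, so consecutive intervals no longer share a boundary element (output shown as comments):

subset(my_df, group_id == 2)
#         group_id start end segment_id
# 3377226        2     1  13          1
# 3377227        2    14  24          2
# 3377228        2    25  27          3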

As you can see in the data, segment_id is used to collapse the data.frame: for every group_id, the first and last element of each segment are stored in the variables start and end respectively.

I am struggling to find an efficient solution that scales to millions of records and produces the expanded result, i.e. one row per element inside each interval (as shown in the answers below).

The only solution I have found so far is to convert the data.frame into a matrix and loop over all the segments.
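For reference, a minimal base-R sketch of that kind of row-wise expansion (not the OP's exact matrix loop, just the idea; it assumes the start adjustment above has already been applied):

expanded <- do.call(rbind, Map(function(g, seg, s, e) {
  data.frame(group_id = g, segment_id = seg, element_id = seq(s, e))  # one row per element
}, my_df$group_id, my_df$segment_id, my_df$start, my_df$end))

This is easy to read, but it allocates one small data.frame per input row, which is why a vectorised approach is needed at scale.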

If anything is unclear, please don't hesitate to ask for clarification.

2 answers:

Answer 0 (score: 2):

my_df <- structure(list(group_id = c(1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 6, 6, 6, 7, 7, 7, 8, 9), 
start = c(1L, 1L, 13L, 24L, 1L, 16L, 30L, 1L, 14L, 1L, 1L, 6L, 11L, 1L, 9L, 20L, 1L, 1L), 
end = c(22L, 13L, 24L, 27L, 16L, 30L, 51L, 14L, 26L, 8L, 6L, 11L, 17L, 9L, 20L, 26L, 17L, 14L), 
segment_id = c(1L, 1L, 2L, 3L, 1L, 2L, 3L, 1L, 2L, 1L, 1L, 2L, 3L, 1L, 2L, 3L, 1L, 1L)), 
row.names = 3377225:3377242, class = "data.frame", .Names = c("group_id", "start", "end", "segment_id"))


library(tidyverse)

my_df %>%
  mutate(start = ifelse(start > 1 , start + 1, start)) %>%          # update start values
  group_by(group_id, segment_id) %>%                                # for each group and segment id combination
  nest() %>%                                                        # create a dataset with the rest of the columns
  mutate(element_id_new = map(data, ~ seq(.$start, .$end, 1))) %>%  # get a sequence of values from start to end
  unnest(element_id_new)                                            # unnest the sequence

# # A tibble: 208 x 3
#   group_id segment_id element_id_new
#      <dbl>      <int>          <dbl>
# 1        1          1              1
# 2        1          1              2
# 3        1          1              3
# 4        1          1              4
# 5        1          1              5
# 6        1          1              6
# 7        1          1              7
# 8        1          1              8
# 9        1          1              9
# 10       1          1             10
# # ... with 198 more rows
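
Two notes on this answer: newer tidyr versions prefer the explicit form unnest(cols = element_id_new), and the same result can arguably be reached more directly with tidyr::uncount(), which repeats each row a given number of times. A sketch of that variant (assuming a tidyr version that provides uncount(); the column name element_id_new is kept from the answer above):

my_df %>%
  mutate(start = ifelse(start > 1, start + 1, start)) %>%   # same start adjustment
  uncount(end - start + 1) %>%                              # one row per element in the interval
  group_by(group_id, segment_id) %>%
  mutate(element_id_new = start + row_number() - 1) %>%     # reconstruct the element values
  ungroup() %>%
  select(group_id, segment_id, element_id_new)

This avoids the nest()/unnest() round trip and should yield the same 208 rows.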

Answer 1 (score: 1):

Here is another approach using data.table:

library(data.table)
setDT(my_df)[start == 1, start := 0][
  , .(group_id = rep(group_id, end - start), segment_id = rep(segment_id, end - start))][
      , element_id := rowid(group_id)][]
     group_id segment_id element_id
  1:        1          1          1
  2:        1          1          2
  3:        1          1          3
  4:        1          1          4
  5:        1          1          5
 ---                               
204:        9          1         10
205:        9          1         11
206:        9          1         12
207:        9          1         13
208:        9          1         14

Explanation

The requested correction is applied, but in a different way than the OP suggested: only the few entries with start == 1 are touched. This reduces the number of in-place updates (the whole object is not copied), and it lets us avoid adding + 1 when computing the length of each stretch.

Then group_id and segment_id are each repeated end - start times, as requested. Finally, element_id is appended by numbering the rows within each group_id using the rowid() function.
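
To make the two building blocks concrete, a tiny standalone illustration on toy vectors (not the question's data):

library(data.table)
rep(c("a", "b"), times = c(2L, 3L))  # "a" "a" "b" "b" "b"  - ids repeated by segment length
rowid(c("a", "a", "b", "b", "b"))    # 1 2 1 2 3            - row numbers restart for each id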