The example below shows how counting the number of distinct values while aggregating over rows works with dplyr on a regular data frame but fails on a sparklyr one.
Is there a workaround that keeps the command chain intact?
More generally, how can SQL-style window functions be used on sparklyr data frames?
## generating a data set
set.seed(.328)
df <- data.frame(
  ids = floor(runif(10, 1, 10)),
  cats = sample(letters[1:3], 10, replace = TRUE),
  vals = rnorm(10)
)
## copying to Spark
df.spark <- copy_to(sc, df, "df_spark", overwrite = TRUE)
# Source: table<df_spark> [?? x 3]
# Database: spark_connection
# ids cats vals
# <dbl> <chr> <dbl>
# 9 a 0.7635935
# 3 a -0.7990092
# 4 a -1.1476570
# 6 c -0.2894616
# 9 b -0.2992151
# 2 c -0.4115108
# 9 b 0.2522234
# 9 c -0.8919211
# 6 c 0.4356833
# 6 b -1.2375384
# # ... with more rows
# using the regular dataframe
df %>% mutate(n_ids = n_distinct(ids))
# ids cats vals n_ids
# 9 a 0.7635935 5
# 3 a -0.7990092 5
# 4 a -1.1476570 5
# 6 c -0.2894616 5
# 9 b -0.2992151 5
# 2 c -0.4115108 5
# 9 b 0.2522234 5
# 9 c -0.8919211 5
# 6 c 0.4356833 5
# 6 b -1.2375384 5
# using the sparklyr data frame
df.spark %>% mutate(n_ids = n_distinct(ids))
Error: Window function `distinct()` is not supported by this database
Answer 0 (score: 2)
The best approach here is to compute the count separately, either with count ∘ distinct:
n_ids <- df.spark %>%
  select(ids) %>% distinct() %>% count() %>% collect() %>%
  unlist %>% as.vector

df.spark %>% mutate(n_ids = n_ids)
or with approx_count_distinct:
n_ids_approx <- df.spark %>%
  select(ids) %>% summarise(approx_count_distinct(ids)) %>% collect() %>%
  unlist %>% as.vector

df.spark %>% mutate(n_ids = n_ids_approx)
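In both snippets above, the collect() %>% unlist %>% as.vector tail just extracts a scalar from a one-row, one-column result; a slightly more compact sketch of the same step uses dplyr::pull() (assuming dplyr >= 0.7):

n_ids <- df.spark %>%
  distinct(ids) %>%   # SELECT DISTINCT ids, evaluated in Spark
  count() %>%         # COUNT(*) over the distinct ids, column named n
  pull()              # collect the 1x1 result and return it as a plain value

df.spark %>% mutate(n_ids = n_ids)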
This is a bit verbose, but the window-function approach dplyr uses is a dead end anyway if you want a global, unbounded frame.
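One exception worth noting: Spark SQL does allow approx_count_distinct inside a window, so if an approximate count attached to every row is acceptable, a raw SQL expression avoids collecting to R entirely. A hedged sketch, relying on dbplyr's sql() escape hatch to pass the expression through untranslated:

df.spark %>%
  mutate(n_ids_approx = sql("approx_count_distinct(ids) OVER ()"))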
If you want an exact result, you can also:
df.spark %>%
  spark_dataframe() %>%
  invoke("selectExpr", list("COUNT(DISTINCT ids) as cnt_unique_ids")) %>%
  sdf_register()
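If the goal is the original mutate-style result (the exact count attached to every row), one way to finish the job is to cross-join the 1 x 1 count table back onto the data. The dummy-key join below is an illustrative sketch, not part of the original answer:

cnt <- df.spark %>%
  spark_dataframe() %>%
  invoke("selectExpr", list("COUNT(DISTINCT ids) as cnt_unique_ids")) %>%
  sdf_register()

# cross join via a constant dummy key, then drop the key again
df.spark %>%
  mutate(dummy = 1) %>%
  inner_join(cnt %>% mutate(dummy = 1), by = "dummy") %>%
  select(-dummy)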
Answer 1 (score: 0)
I want to link this thread in, since it addresses this question for sparklyr. Using approx_count_distinct is, in my view, the best solution. In my experience, dbplyr does not translate this function when a window is involved, so it is best to write the SQL yourself:
mtcars_spk <- copy_to(sc, mtcars, "mtcars_spk", overwrite = TRUE)

mtcars_spk2 <- mtcars_spk %>%
  dplyr::mutate(test = paste0(gear, " ", carb)) %>%
  dplyr::mutate(discnt = sql("approx_count_distinct(test) OVER (PARTITION BY cyl)"))
This thread addresses the problem more broadly and discusses countDistinct vs. approxCountDistinct.
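As a final note: if an exact per-group count is required, the usual workaround is a grouped summarise joined back onto the table, since dbplyr does translate n_distinct() in an aggregation context (it is only the window form that fails). A sketch under that assumption, reusing mtcars_spk2 from above:

exact_cnt <- mtcars_spk2 %>%
  dplyr::group_by(cyl) %>%
  dplyr::summarise(discnt_exact = n_distinct(test))  # COUNT(DISTINCT test) per cyl

mtcars_spk2 %>% dplyr::left_join(exact_cnt, by = "cyl")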