Question: Including null values in an Apache Spark Join has answers for Scala, PySpark and SparkR, but none for sparklyr. I have not been able to figure out how to make inner_join in sparklyr treat null values in the join columns as equal. Does anyone know how this can be done in sparklyr?
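For example (a minimal sketch, assuming a local Spark connection; the table names and data are made up for illustration), a plain inner_join drops rows whose keys are NA on both sides, because SQL equality never matches NULL against NULL:
library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")

# Both tables carry an NA in the join key
lhs <- copy_to(sc, tibble(id = c(NA, "foo", "bar"), val = 1:3), "lhs")
rhs <- copy_to(sc, tibble(id = c(NA, "foo", "baz"), val = 4:6), "rhs")

# Only the "foo" row comes back; the NA keys do not match each other
lhs %>% inner_join(rhs, by = "id", suffix = c("_x", "_y"))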
Answer 0 (score: 1)
You can invoke an implicit cross join:
#' Return a Cartesian product of Spark tables
#'
#' @param df1 tbl_spark
#' @param df2 tbl_spark
#' @param explicit logical If TRUE use crossJoin, otherwise
#'   join without an expression
#' @param suffix character suffixes to be used on duplicate names
cross_join <- function(df1, df2,
                       explicit = FALSE, suffix = c("_x", "_y")) {
  # Disambiguate any column names shared by both tables
  # (funs() is soft-deprecated in newer dplyr; ~ paste0(., suffix[1]) also works)
  common_cols <- intersect(colnames(df1), colnames(df2))
  if (length(common_cols) > 0) {
    df1 <- df1 %>% rename_at(common_cols, funs(paste0(., suffix[1])))
    df2 <- df2 %>% rename_at(common_cols, funs(paste0(., suffix[2])))
  }
  # Call the JVM-side Dataset join / crossJoin method directly
  sparklyr::invoke(
    spark_dataframe(df1),
    if (explicit) "crossJoin" else "join",
    spark_dataframe(df2)
  ) %>% sdf_register()
}
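Calling the JVM-side join method with no join expression yields a Cartesian product, which Spark refuses to plan unless implicit cross joins are allowed; the explicit = TRUE path uses crossJoin, which needs no such setting.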
and filter using IS NOT DISTINCT FROM:
# Enable cross joins
sc %>%
  spark_session() %>%
  sparklyr::invoke("conf") %>%
  sparklyr::invoke("set", "spark.sql.crossJoin.enabled", "true")

df1 <- copy_to(sc, tibble(id1 = c(NA, "foo", "bar"), val = 1:3))
df2 <- copy_to(sc, tibble(id2 = c(NA, "foo", "baz"), val = 4:6))

df1 %>%
  cross_join(df2) %>%
  filter(id1 %IS NOT DISTINCT FROM% id2)
# Source: spark<?> [?? x 4]
  id1   val_x id2   val_y
* <chr> <int> <chr> <int>
1 NA        1 NA        4
2 foo       2 foo       5
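The optimized logical plan confirms that Catalyst rewrites the filtered cross join into a null-safe equi-join. A sketch of how the plan shown below can be pulled out from R (this reaches into Spark internals via invoke(); queryExecution and optimizedPlan are Spark accessors, not a documented sparklyr API):
df1 %>%
  cross_join(df2) %>%
  filter(id1 %IS NOT DISTINCT FROM% id2) %>%
  spark_dataframe() %>%
  sparklyr::invoke("queryExecution") %>%
  sparklyr::invoke("optimizedPlan")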
<jobj[62]>
org.apache.spark.sql.catalyst.plans.logical.Join
Join Inner, (id1#10 <=> id2#76)
:- Project [id1#10, val#11 AS val_x#129]
: +- InMemoryRelation [id1#10, val#11], StorageLevel(disk, memory, deserialized, 1 replicas)
: +- Scan ExistingRDD[id1#10,val#11]
+- Project [id2#76, val#77 AS val_y#132]
+- InMemoryRelation [id2#76, val#77], StorageLevel(disk, memory, deserialized, 1 replicas)
+- Scan ExistingRDD[id2#76,val#77]
The <=> operator (Spark's null-safe equality) should work the same way:
df1 %>%
  cross_join(df2) %>%
  filter(id1 %<=>% id2)
Note that it is also possible to use a dplyr-style cross join:
mutate(df1, `_const` = TRUE) %>%
  inner_join(mutate(df2, `_const` = TRUE), by = c("_const")) %>%
  select(-`_const`) %>%
  filter(id1 %IS NOT DISTINCT FROM% id2)
but I wouldn't recommend it, as it is less robust (depending on the context, the optimizer may be unable to recognize that _const is a constant).
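As a side note (an alternative sketch, not part of the original answer): <=> is accepted directly in a Spark SQL join condition, so the same result can be obtained without any cross join by writing the query through sdf_sql. The temp view names d1 and d2 are registered explicitly here for the sketch:
d1 <- copy_to(sc, tibble(id1 = c(NA, "foo", "bar"), val = 1:3), "d1")
d2 <- copy_to(sc, tibble(id2 = c(NA, "foo", "baz"), val = 4:6), "d2")

# Planned as a null-safe equi-join; spark.sql.crossJoin.enabled is not needed
sdf_sql(sc, "
  SELECT d1.id1, d1.val AS val_x, d2.id2, d2.val AS val_y
  FROM d1 JOIN d2 ON d1.id1 <=> d2.id2
")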