I have the following code:
// Assumes a SparkSession named `spark` (e.g. in spark-shell)
import spark.implicits._
import org.apache.spark.sql.functions.lit

val ori0 = Seq(
  (0L, "1")
).toDF("id", "col1")
val date0 = Seq(
  (0L, "1")
).toDF("id", "date")

val joinExpression = $"col1" === $"date"
ori0.join(date0, joinExpression).show()   // works

val ori = spark.range(1).withColumn("col1", lit("1"))
val date = spark.range(1).withColumn("date", lit("1"))
ori.join(date, joinExpression).show()     // fails
The first join works, but the second one fails with this error:
Exception in thread "main" org.apache.spark.sql.AnalysisException: Detected implicit cartesian product for INNER join between logical plans
Range (0, 1, step=1, splits=Some(4))
and
Project [_1#11L AS id#14L, _2#12 AS date#15]
+- Filter (isnotnull(_2#12) && (1 = _2#12))
+- LocalRelation [_1#11L, _2#12]
Join condition is missing or trivial.
I have stared at this many times and still don't see why the second one is a cross join. What is the difference between the two?
Answer 0 (score: 1)
If you expand the second join, you will find that, because col1 and date are both created with lit("1"), the optimizer can constant-fold the equality, making the query effectively equivalent to:
SELECT *
FROM ori JOIN date
WHERE 1 = 1
Clearly a WHERE 1 = 1 join condition is trivial, and a trivial (or missing) join condition is exactly what makes Spark flag an implicit Cartesian product.
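You can see the difference by inspecting the plans. A minimal sketch, reusing the DataFrames from the question (exact plan text varies by Spark version):

// The first join keeps a real equality predicate, so Spark can plan an
// equi-join (e.g. BroadcastHashJoin or SortMergeJoin) on it:
ori0.join(date0, joinExpression).explain()

// In the second join, constant folding turns the condition into `true`,
// leaving no usable join keys. Explaining it trips the same check unless
// cross joins are allowed:
spark.conf.set("spark.sql.crossJoin.enabled", "true")
ori.join(date, joinExpression).explain()
// The physical plan should now show a nested-loop / Cartesian operator
// with no join condition left.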
This is not the case in the first example: there the optimizer cannot infer that the join columns only ever contain a single value, so the equality predicate survives and Spark will try to apply a hash or sort-merge join.
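If the Cartesian product is actually what you want, or if you can give the optimizer a real join key, either fix is straightforward. A minimal sketch, reusing the question's ori and date:

// Option 1: state the Cartesian product explicitly instead of relying on
// a trivial condition (no config flag needed):
ori.crossJoin(date).show()

// Option 2: join on a column the optimizer cannot fold away, e.g. the id
// column produced by spark.range(), so an equi-join strategy applies:
ori.join(date, Seq("id")).show()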