I have a table with an array-typed column named writer, whose values look like array[value1, value2], array[value2, value3], and so on. I am doing a self join to find rows whose arrays share a common value. I tried:
sqlContext.sql("SELECT R2.writer FROM table R1 JOIN table R2 ON R1.id != R2.id WHERE ARRAY_INTERSECTION(R1.writer, R2.writer)[0] is not null ")
and
sqlContext.sql("SELECT R2.writer FROM table R1 JOIN table R2 ON R1.id != R2.id WHERE ARRAY_INTERSECT(R1.writer, R2.writer)[0] is not null ")
but got the same exception in both cases:
Exception in thread "main" org.apache.spark.sql.AnalysisException: Undefined function: 'ARRAY_INTERSECT'. This function is neither a registered temporary function nor a permanent function registered in the database 'default'.; line 1 pos 80
Probably Spark SQL does not support ARRAY_INTERSECTION and ARRAY_INTERSECT. How can I achieve my goal in Spark SQL?
Answer 0 (score: 6)
You need a UDF:
import org.apache.spark.sql.functions.udf

// Register a UDF that returns the elements common to both arrays
spark.udf.register("array_intersect",
  (xs: Seq[String], ys: Seq[String]) => xs.intersect(ys))
and then check whether the intersection is empty:
scala> spark.sql("SELECT size(array_intersect(array('1', '2'), array('3', '4'))) = 0").show
+-----------------------------------------+
|(size(UDF(array(1, 2), array(3, 4))) = 0)|
+-----------------------------------------+
| true|
+-----------------------------------------+
scala> spark.sql("SELECT size(array_intersect(array('1', '2'), array('1', '4'))) = 0").show
+-----------------------------------------+
|(size(UDF(array(1, 2), array(1, 4))) = 0)|
+-----------------------------------------+
| false|
+-----------------------------------------+
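Putting it together, a minimal sketch of the original self join using the UDF registered above. It assumes the data is exposed as a temporary view named table, with the id and writer columns from the question:

// Sketch only: assumes a temporary view named "table" with id and
// writer columns, and the array_intersect UDF registered above.
spark.sql("""
  SELECT R2.writer
  FROM table R1 JOIN table R2 ON R1.id != R2.id
  WHERE size(array_intersect(R1.writer, R2.writer)) > 0
""").show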
Answer 1 (score: 2)
Since Spark 2.4, the array_intersect function is available directly in SQL:
spark.sql(
  "SELECT array_intersect(array(1, 42), array(42, 3)) AS intersection"
).show
+------------+
|intersection|
+------------+
| [42]|
+------------+
and via the Dataset API:
import org.apache.spark.sql.functions.array_intersect
import spark.implicits._  // for toDF and the $-column syntax

Seq((Seq(1, 42), Seq(42, 3)))
  .toDF("a", "b")
  .select(array_intersect($"a", $"b") as "intersection")
  .show
+------------+
|intersection|
+------------+
| [42]|
+------------+
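For the question's use case, a sketch of the same self join through the Dataset API. The sample rows below are made up for illustration, mirroring the question's id and writer columns:

import org.apache.spark.sql.functions.{array_intersect, size}
import spark.implicits._

// Illustrative data shaped like the question's table
val df = Seq(
  (1, Seq("value1", "value2")),
  (2, Seq("value2", "value3")),
  (3, Seq("value4", "value5"))
).toDF("id", "writer")

// Keep pairs of distinct rows whose writer arrays overlap
df.as("R1")
  .join(df.as("R2"), $"R1.id" =!= $"R2.id")
  .where(size(array_intersect($"R1.writer", $"R2.writer")) > 0)
  .select($"R2.writer")
  .show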
Equivalent functions also exist in the guest languages: pyspark.sql.functions.array_intersect in PySpark and SparkR::array_intersect in SparkR.