How do I query a nested array type in a JSON file with Spark?

Asked: 2018-12-20 07:31:41

Tags: apache-spark apache-spark-sql

How can I query a nested array type with a join, using Spark Datasets?

Currently I explode the array type and join against the dataset whose matching records need to be removed. Is there a way to query it directly, without exploding?

{
  "id": 525,
  "arrayRecords": [
    {
      "field1": 525,
      "field2": 0
    },
    {
      "field1": 537,
      "field2": 1
    }
  ]
}

Code:

val df = sqlContext.read.json("jsonfile")
val someDF = Seq(("1"), ("525"), ("3")).toDF("FIELDIDS")
// Explode the nested array, then join to find the matching ids
val withSRCRec = df.select($"*", explode($"arrayRecords") as "exploded_arrayRecords")
val fieldIdMatchedDF = withSRCRec.as("table1")
  .join(someDF.as("table2"), $"table1.exploded_arrayRecords.field1" === $"table2.FIELDIDS")
  .select($"table1.id")
// A left-anti join drops the records whose ids matched
val finalDf = df.as("table1")
  .join(fieldIdMatchedDF.as("table2"), $"table1.id" === $"table2.id", "leftanti")

Records whose ids match the FIELDIDS need to be removed.

2 Answers:

Answer 0 (score: 0):

You could use array_except instead:

array_except(col1: Column, col2: Column): Column returns an array of the elements in the first array but not in the second array, without duplicates. The order of the elements in the result is not determined.
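
As a quick sanity check on those semantics, here is a minimal sketch (it assumes Spark 2.4+, where array_except was introduced):

// array_except keeps elements of the first array that are absent from the second
spark.sql("SELECT array_except(array(525, 537), array(1, 525, 3)) AS diff").show()
// +-----+
// | diff|
// +-----+
// |[537]|
// +-----+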

A solution could look as follows:

val input = spark.read.option("multiLine", true).json("input.json")
scala> input.show(false)
+--------------------+---+
|arrayRecords        |id |
+--------------------+---+
|[[525, 0], [537, 1]]|525|
+--------------------+---+

// Since field1 is of type int, let's convert the ids to ints
// You could do this in Scala directly or in Spark SQL's select
val fieldIds = Seq("1", "525", "3").toDF("FIELDIDS").select($"FIELDIDS" cast "int")

// Collect the ids for array_except
val ids = fieldIds.select(collect_set("FIELDIDS") as "ids")

// The trick is to crossJoin (it is cheap given 1-row ids dataset)
val solution = input
  .crossJoin(ids)
  .select(array_except($"arrayRecords.field1", $"ids") as "unmatched")
scala> solution.show
+---------+
|unmatched|
+---------+
|    [537]|
+---------+
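
If the actual goal from the question is to drop the rows that contain any matching id (rather than to compute the unmatched elements), a hedged variation on the same idea, reusing the input and ids datasets above, could use Spark 2.4+ higher-order functions:

// Keep only rows where no element of arrayRecords matches an id.
// exists(...) is a Spark SQL higher-order function (2.4+); the cast
// aligns field1 (inferred as bigint from JSON) with the int FIELDIDS.
val kept = input
  .crossJoin(ids)
  .where(expr("NOT exists(arrayRecords, r -> array_contains(ids, cast(r.field1 AS int)))"))
  .drop("ids")

This also addresses the original "without exploding" requirement directly, since the predicate is evaluated per row against the nested array.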

Answer 1 (score: -1):

You could register a temporary view over your Dataset and query it with SQL. It would be something like this:

// createOrReplaceTempView replaces the long-deprecated registerTempTable
someDs.createOrReplaceTempView("sometable")
spark.sql("SELECT arrayRecords.field1 FROM sometable")