I am reading a Hive table with two columns, id and jsonString. I can easily turn the jsonString into a Spark data structure by calling spark.read.json, but I also have to add the id column.
val jsonStr1 = """{"fruits":[{"fruit":"banana"},{"fruid":"apple"},{"fruit":"pera"}],"bar":{"foo":"[\"daniel\",\"pedro\",\"thing\"]"},"daniel":"daniel data random","cars":["montana","bagulho"]}"""
val jsonStr2 = """{"fruits":[{"dt":"banana"},{"fruid":"apple"},{"fruit":"pera"}],"bar":{"foo":"[\"daniel\",\"pedro\",\"thing\"]"},"daniel":"daniel data random","cars":["montana","bagulho"]}"""
val jsonStr3 = """{"fruits":[{"a":"banana"},{"fruid":"apple"},{"fruit":"pera"}],"bar":{"foo":"[\"daniel\",\"pedro\",\"thing\"]"},"daniel":"daniel data random","cars":["montana","bagulho"]}"""
case class Foo(id: Integer, json: String)
val ds = Seq(new Foo(1,jsonStr1), new Foo(2,jsonStr2), new Foo(3,jsonStr3)).toDS
val jsonDF = spark.read.json(ds.select($"json").rdd.map(r => r.getAs[String](0)).toDS)
jsonDF.show
+--------------------+------------------+------------------+--------------------+
| bar| cars| daniel| fruits|
+--------------------+------------------+------------------+--------------------+
|[["daniel","pedro...|[montana, bagulho]|daniel data random|[[,,, banana], [,...|
|[["daniel","pedro...|[montana, bagulho]|daniel data random|[[, banana,,], [,...|
|[["daniel","pedro...|[montana, bagulho]|daniel data random|[[banana,,,], [,,...|
+--------------------+------------------+------------------+--------------------+
I want to add the id column from the Hive table, like this:
+--------------------+------------------+------------------+--------------------+---+
|                 bar|              cars|            daniel|              fruits| id|
+--------------------+------------------+------------------+--------------------+---+
|[["daniel","pedro...|[montana, bagulho]|daniel data random|[[,,, banana], [,...|  1|
|[["daniel","pedro...|[montana, bagulho]|daniel data random|[[, banana,,], [,...|  2|
|[["daniel","pedro...|[montana, bagulho]|daniel data random|[[banana,,,], [,,...|  3|
+--------------------+------------------+------------------+--------------------+---+
I would rather not use regular expressions.
I created a UDF that takes both fields as parameters, uses a proper JSON library to add the desired field (id), and returns a new JSON string. It works like a charm, but I was hoping the Spark API would offer a better way. I am using Apache Spark 2.3.0.
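For reference, a sketch of what such a UDF could look like, assuming json4s (which is bundled with Spark) as the JSON library; the name `addId` is mine, not from the original code:

```scala
import org.apache.spark.sql.functions.udf
import org.json4s._
import org.json4s.jackson.JsonMethods.{parse, render, compact}

// Parse the JSON string, merge in an "id" field, and re-serialize.
// json4s is an assumption here; any JSON library would do.
val addId = udf { (id: Int, json: String) =>
  compact(render(parse(json) merge JObject("id" -> JInt(id))))
}

// Re-read the patched strings so the inferred schema includes id.
val jsonWithId = spark.read.json(
  ds.select(addId($"id", $"json").alias("json")).as[String])
```

This works, but it pays for a UDF call plus a parse/serialize round trip per row, which is what motivates looking for a built-in alternative.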
Answer 0 (score: 2)
One approach is to apply from_json to the JSON string with the corresponding schema, as follows:
import org.apache.spark.sql.types._
import org.apache.spark.sql.functions._
import spark.implicits._
case class Foo(id: Int, json: String)
val df = Seq(Foo(1, jsonStr1), Foo(2, jsonStr2), Foo(3, jsonStr3)).toDF
val schema = StructType(Seq(
  StructField("bar", StructType(Seq(
    StructField("foo", StringType, true)
  )), true),
  StructField("cars", ArrayType(StringType, true), true),
  StructField("daniel", StringType, true),
  StructField("fruits", ArrayType(StructType(Seq(
    StructField("a", StringType, true),
    StructField("dt", StringType, true),
    StructField("fruid", StringType, true),
    StructField("fruit", StringType, true)
  )), true), true)
))
df.
  withColumn("json_col", from_json($"json", schema)).
  select($"id", $"json_col.*").
  show
// +---+--------------------+------------------+------------------+--------------------+
// | id| bar| cars| daniel| fruits|
// +---+--------------------+------------------+------------------+--------------------+
// | 1|[["daniel","pedro...|[montana, bagulho]|daniel data random|[[null,null,null,...|
// | 2|[["daniel","pedro...|[montana, bagulho]|daniel data random|[[null,banana,nul...|
// | 3|[["daniel","pedro...|[montana, bagulho]|daniel data random|[[banana,null,nul...|
// +---+--------------------+------------------+------------------+--------------------+
Answer 1 (score: 0)
I already knew about the from_json function, but in my case manually building a schema for every JSON was "impossible". I thought Spark would have a more "idiomatic" interface. This is the final solution:
ds.select($"id",from_json($"json",jsonDF.schema).alias("_json_path")).select($"_json_path.*",$"id").show
+--------------------+------------------+------------------+--------------------+---+
| bar| cars| daniel| fruits| id|
+--------------------+------------------+------------------+--------------------+---+
|[["daniel","pedro...|[montana, bagulho]|daniel data random|[[,,, banana], [,...| 1|
|[["daniel","pedro...|[montana, bagulho]|daniel data random|[[, banana,,], [,...| 2|
|[["daniel","pedro...|[montana, bagulho]|daniel data random|[[banana,,,], [,,...| 3|
+--------------------+------------------+------------------+--------------------+---+
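A caveat about this final solution: building `jsonDF` just to reuse its inferred schema scans and parses every JSON string once before the real query runs. If the table is large, the schema can be inferred from a sample instead. A sketch, assuming a sample of 100 rows is representative (the limit is an arbitrary choice and must cover all fields that appear in the data, otherwise missing fields are dropped):

```scala
// Infer the schema from a limited sample, then apply it to the full dataset.
val sampleSchema = spark.read.json(ds.select($"json").as[String].limit(100)).schema

ds.select($"id", from_json($"json", sampleSchema).alias("_json_path"))
  .select($"_json_path.*", $"id")
  .show
```

The trade-off is robustness versus cost: the full scan guarantees a complete schema, while sampling is cheaper but can silently miss rare fields.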