I am writing an ETL Spark (2.4) job in Scala, reading ;-separated CSV files (with a glob pattern) from S3. The data is loaded into a DataFrame and contains a column (let's say it is named custom) holding a JSON-formatted string (multiple levels of nesting). The goal is to automatically infer the schema from that column, so that the data can be structured for a write sink of Parquet files in S3.
This post (How to query JSON data column using Spark DataFrames?) suggests that schema_of_json in Spark 2.4 can infer the schema from a JSON-formatted column or string.
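For reference, schema_of_json does work when handed a plain string; a minimal spark-shell check (the JSON string here is made up for illustration):

scala> spark.sql("""SELECT schema_of_json('{"a": 1, "b": {"c": "x"}}')""").show(false)
// prints a DDL-style string, e.g. struct&lt;a:bigint,b:struct&lt;c:string&gt;&gt;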
Here is what I tried:
val jsonSchema: String = df.select(schema_of_json(col("custom"))).as[String].first

df.withColumn(
  "nestedCustom",
  from_json(col("custom"), jsonSchema, Map[String, String]())
)
But the above does not work and throws this exception:
Exception in thread "main" org.apache.spark.sql.AnalysisException: cannot resolve 'schemaofjson(`custom`)' due to data type mismatch: The input json should be a string literal and not null; however, got `custom`.;;
'Project [schemaofjson(custom#7) AS schemaofjson(custom)#16]
Keep in mind that I am filtering out null values on custom for this DataFrame.
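The usual Spark 2.4 workaround (a sketch, assuming every row of custom follows the same schema) is to sample a single value on the driver and feed it to schema_of_json as a literal, because in 2.4 the function only accepts a foldable string:

import org.apache.spark.sql.functions.{col, from_json, lit, schema_of_json}
import spark.implicits._ // needed for .as[String]

// Take one non-null sample of the JSON column on the driver...
val sample: String = df.select(col("custom")).as[String].first
// ...and pass it to schema_of_json as a *literal*; Spark 2.4 rejects
// anything that is not a foldable string, hence the exception above.
val inferred = schema_of_json(lit(sample))
// from_json in 2.4 has an overload that takes the schema as a Column.
val parsed = df.withColumn("nestedCustom", from_json(col("custom"), inferred))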
Edit: full code below.
import org.apache.spark.sql
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

/**
 * RandomName entry point.
 *
 * @author Random author
 */
object RandomName {
  def main(args: Array[String]): Unit = {
    val spark: SparkSession = SparkSession.builder
      .appName("RandomName")
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .config("spark.sql.parquet.fs.optimized.committer.optimization-enabled", true)
      .getOrCreate

    import spark.implicits._

    val randomName: RandomName = new RandomName(spark)

    val df: sql.DataFrame = randomName.read().filter($"custom".isNotNull)
    val jsonSchema: String = df.select(schema_of_json(col("custom"))).as[String].first

    df.withColumn(
      "nestedCustom",
      from_json(col("custom"), jsonSchema, Map[String, String]())
    )

    df.show
    spark.stop
  }
}
class RandomName(private val spark: SparkSession) {

  /**
   * Reads CSV files from S3 and creates a sql.DataFrame.
   *
   * @return a sql.DataFrame
   */
  def read(): sql.DataFrame = {
    val tableSchema = StructType(
      Array(
        StructField("a", StringType, true),
        StructField("b", StringType, true),
        StructField("c", DateType, true),
        StructField("custom", StringType, true)
      ))

    spark.read
      .format("csv")
      .option("sep", ";")
      .option("header", "true")
      // Note: inferSchema is ignored when an explicit schema is supplied.
      .option("inferSchema", "true")
      .schema(tableSchema)
      .load("s3://random-bucket/*")
  }
}
And a JSON sample:
{
  "lvl1": {
    "lvl2a": {
      "lvl3a": {
        "lvl4a": "random_data",
        "lvl4b": "random_data"
      }
    },
    "lvl2b": {
      "lvl3a": {
        "lvl4a": "random_data"
      },
      "lvl3b": {
        "lvl4a": "random_data",
        "lvl4b": "random_data"
      }
    }
  }
}
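For that sample, the inferred schema should come out as a DDL-style string along these lines (hand-derived, so treat it as illustrative):

struct&lt;lvl1:struct&lt;lvl2a:struct&lt;lvl3a:struct&lt;lvl4a:string,lvl4b:string&gt;&gt;,lvl2b:struct&lt;lvl3a:struct&lt;lvl4a:string&gt;,lvl3b:struct&lt;lvl4a:string,lvl4b:string&gt;&gt;&gt;&gt;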
Answer 0 (score: 0):
This indicates that custom is not a valid input for schema_of_json:
scala> spark.sql("SELECT schema_of_json(struct(1, 2))")
org.apache.spark.sql.AnalysisException: cannot resolve 'schemaofjson(named_struct('col1', 1, 'col2', 2))' due to data type mismatch: argument 1 requires string type, however, 'named_struct('col1', 1, 'col2', 2)' is of struct<col1:int,col2:int> type.; line 1 pos 7;
...
You should go back to your data and make sure custom really is a String.
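A quick way to check (a sketch against the df from the question):

// Print the schema Spark actually applied when reading the CSV...
df.printSchema()
// ...or assert the column's type programmatically:
import org.apache.spark.sql.types.StringType
assert(df.schema("custom").dataType == StringType)

Note that even when custom is a string column, schema_of_json in Spark 2.4 still rejects it: the error's "should be a string literal" means the argument must be foldable, which is what the lit()-based sketch earlier in the question works around.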