I have the following case class:

case class User(userId: String)

and the following schema for my table:

+--------------------+------------------+
|            col_name|         data_type|
+--------------------+------------------+
|             user_id|            string|
+--------------------+------------------+

When I try to convert the DataFrame to a typed Dataset[User] with spark.read.table("MyTable").as[User], I get an error saying that the field names don't match:

Exception in thread "main" org.apache.spark.sql.AnalysisException:
cannot resolve '`userId`' given input columns: [user_id];;

Is there any simple way to solve this without breaking Scala idioms and naming my fields user_id? Of course, my real table has many fields, and I have many more case classes/tables, so manually defining an Encoder for each case class is not feasible (and I don't know macros well enough to write one, although I'd happily use one if it exists!).

I feel like I'm missing a very obvious "convert snake_case to camelCase = true" option, since one exists in almost every ORM I've ever worked with.
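For reference, a minimal sketch of the failing conversion (assuming a running SparkSession named spark and that MyTable exists with the schema above):

import org.apache.spark.sql.SparkSession

case class User(userId: String)

val spark = SparkSession.builder().appName("snake-case-repro").getOrCreate()
import spark.implicits._

// The encoder resolves the case class field userId against the table's
// columns; the table only has user_id, so analysis fails with the
// AnalysisException quoted above.
val users = spark.read.table("MyTable").as[User]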
Answer 0 (score: 0)

You can rename the DataFrame's columns from snake_case to camelCase before applying the encoder:
scala> val df = Seq(("Eric" ,"Theodore", "Cartman"), ("Butters", "Leopold", "Stotch")).toDF.select(concat($"_1", lit(" "), ($"_2")) as "first_and_middle_name", $"_3" as "last_name")
df: org.apache.spark.sql.DataFrame = [first_and_middle_name: string, last_name: string]
scala> df.show
+---------------------+---------+
|first_and_middle_name|last_name|
+---------------------+---------+
| Eric Theodore| Cartman|
| Butters Leopold| Stotch|
+---------------------+---------+
scala> val ccnames = df.columns.map(sc => {val ccn = sc.split("_")
| (ccn.head +: ccn.tail.map(_.capitalize)).mkString
| })
ccnames: Array[String] = Array(firstAndMiddleName, lastName)
scala> df.toDF(ccnames: _*).show
+------------------+--------+
|firstAndMiddleName|lastName|
+------------------+--------+
| Eric Theodore| Cartman|
| Butters Leopold| Stotch|
+------------------+--------+
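To tie this back to the question: once the columns are camelCase, the built-in encoder resolves them. A sketch using a hypothetical case class that matches the demo's renamed columns (spark.implicits._ is assumed in scope, as it is in spark-shell):

case class FullName(firstAndMiddleName: String, lastName: String)

// df and ccnames come from the session above; after renaming, the column
// names match the case class fields, so .as[...] succeeds.
val people = df.toDF(ccnames: _*).as[FullName]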
EDIT: Does this help? Define a helper that takes a loader function (String => DataFrame) and a path (String):
scala> val parquetloader = spark.read.parquet _
parquetloader: String => org.apache.spark.sql.DataFrame = <function1>
scala> val tableloader = spark.read.table _
tableloader: String => org.apache.spark.sql.DataFrame = <function1>
scala> val textloader = spark.read.text _
textloader: String => org.apache.spark.sql.DataFrame = <function1>
// csv loader and others
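A CSV loader fits the same shape; an explicit function type pins DataFrameReader's single-path csv overload (a sketch, naming mine):

val csvloader: String => DataFrame = path => spark.read.csv(path)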
scala> :paste
// Entering paste mode (ctrl-D to finish)
def snakeCaseToCamelCaseDataFrameColumns(path: String, loader: String => DataFrame): DataFrame = {
  // Load once, then rename every snake_case column to its camelCase form.
  val df = loader(path)
  val ccnames = df.columns.map { sc =>
    val ccn = sc.split("_")
    (ccn.head +: ccn.tail.map(_.capitalize)).mkString
  }
  df.toDF(ccnames: _*)
}
// Exiting paste mode, now interpreting.
snakeCaseToCamelCaseDataFrameColumns: (path: String, loader: String => org.apache.spark.sql.DataFrame)org.apache.spark.sql.DataFrame
val oneDF = snakeCaseToCamelCaseDataFrameColumns("/path/to/table", tableloader)
val twoDF = snakeCaseToCamelCaseDataFrameColumns("/path/to/parquet/file", parquetloader)
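As an alternative sketch (my own variant, not part of the original answer), the renaming can be decoupled from loading entirely, so it composes with any reader and feeds straight into .as[...]:

import org.apache.spark.sql.DataFrame

// Rename every snake_case column of an already-loaded DataFrame to camelCase.
def toCamelCaseColumns(df: DataFrame): DataFrame = {
  val ccnames = df.columns.map { sc =>
    val parts = sc.split("_")
    (parts.head +: parts.tail.map(_.capitalize)).mkString
  }
  df.toDF(ccnames: _*)
}

// Usage:
// val users = toCamelCaseColumns(spark.read.table("MyTable")).as[User]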