How to resolve the exception: java.math.BigDecimal is not a valid external type for schema of double, when re-applying a schema on a DataFrame?

Date: 2019-02-06 07:00:10

Tags: scala apache-spark hadoop hive apache-spark-sql

I am trying to move data from the table system_releases in Greenplum to Hive in the following way:

val yearDF = spark.read.format("jdbc").option("url", "urltemplate;MaxNumericScale=30;MaxNumericPrecision=40;")
                                      .option("dbtable", s"(${execQuery}) as year2016")
                                      .option("user", "user")
                                      .option("password", "pwd")
                                      .option("partitionColumn","release_number")
                                      .option("lowerBound", 306)
                                      .option("upperBound", 500)
                                      .option("numPartitions",2)
                                      .load()

The schema of the DataFrame yearDF, as inferred by Spark, is:

description:string
status_date:timestamp
time_zone:string
table_refresh_delay_min:decimal(38,30)
online_patching_enabled_flag:string
release_number:decimal(38,30)
change_number:decimal(38,30)
interface_queue_enabled_flag:string
rework_enabled_flag:string
smart_transfer_enabled_flag:string
patch_number:decimal(38,30)
threading_enabled_flag:string
drm_gl_source_name:string
reverted_flag:string
table_refresh_delay_min_text:string
release_number_text:string
change_number_text:string

I have the same table on Hive with the following data types:

val hiveCols="description:string,status_date:timestamp,time_zone:string,table_refresh_delay_min:double,online_patching_enabled_flag:string,release_number:double,change_number:double,interface_queue_enabled_flag:string,rework_enabled_flag:string,smart_transfer_enabled_flag:string,patch_number:double,threading_enabled_flag:string,drm_gl_source_name:string,reverted_flag:string,table_refresh_delay_min_text:string,release_number_text:string,change_number_text:string"

Even though only a few columns in GP are numeric, the columns table_refresh_delay_min, release_number, change_number and patch_number come through with far too many decimal places. So I saved the DataFrame yearDF as a CSV file to see how Spark reads the data. For example, the largest release_number in GP is 306.00, but in the CSV file written from yearDF the value becomes 306.000000000000000000.
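
To see exactly what Spark inferred before writing anything out, a quick check like the one below (a hypothetical inspection snippet, not part of the original job) shows both the column type and the padded values:

yearDF.printSchema()                            // release_number: decimal(38,30)
yearDF.select("release_number").show(5, false)  // prints e.g. 306.000000000000000000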

I tried taking the Hive table schema, converting it into a StructType, and applying it to yearDF as shown below.

// Map a Hive column type name to the corresponding Spark SQL DataType.
def convertDatatype(datatype: String): DataType = {
  val convert = datatype match {
    case "string"     => StringType
    case "bigint"     => LongType
    case "int"        => IntegerType
    case "double"     => DoubleType
    case "date"       => TimestampType
    case "boolean"    => BooleanType
    case "timestamp"  => TimestampType
  }
  convert
}

// Parse the Hive column list into a StructType and re-apply it to yearDF's rows.
val schemaList        = hiveCols.split(",")
val schemaStructType  = new StructType(schemaList.map(col => col.split(":")).map(e => StructField(e(0), convertDatatype(e(1)), true)))
val newDF = spark.createDataFrame(yearDF.rdd, schemaStructType)
newDF.write.format("csv").save("hdfs/location")

But I get the error:

Caused by: java.lang.RuntimeException: java.math.BigDecimal is not a valid external type for schema of double
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.evalIfFalseExpr8$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply_2$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
    at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:287)
    ... 17 more

I tried casting the decimal columns to DoubleType in the following way, but I still face the same exception.

val pattern = """DecimalType\(\d+,(\d+)\)""".r
val df2 = dataDF.dtypes.
  collect{ case (dn, dt) if pattern.findFirstMatchIn(dt).map(_.group(1)).getOrElse("0") != "0" => dn }.
  foldLeft(dataDF)((accDF, c) => accDF.withColumn(c, col(c).cast("Double")))

Caused by: java.lang.RuntimeException: java.math.BigDecimal is not a valid external type for schema of double
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.evalIfFalseExpr8$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply_2$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
    at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:287)
    ... 17 more

After trying both of the above approaches I am out of ideas. Could anyone tell me how to properly cast the DataFrame's columns to the required data types?

1 Answer:

Answer 0 (score: 0)

In this case, when converting the RDD to a DF, you need to specify exactly the same types that the Spark schema uses.

For example, when you run printSchema on the yearDF DataFrame, you get the schema shown in the question: table_refresh_delay_min, release_number, change_number and patch_number are all decimal(38,30). So when converting the RDD to a DF, those fields must be specified as DecimalType(38,30), and not as the DoubleType you used.
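
A minimal sketch of what that looks like in code (column names taken from the question's schema; it assumes yearDF is still the frame read over JDBC): keep DecimalType(38,30) in the schema passed to createDataFrame, and cast to double only afterwards if the Hive table really needs doubles.

import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.{DecimalType, DoubleType, StructType}

// The schema handed to createDataFrame must match the types actually held
// inside the rows: DecimalType(38,30) for these columns, not DoubleType.
val decimalCols = Seq("table_refresh_delay_min", "release_number",
                      "change_number", "patch_number")

val schemaWithDecimals = StructType(yearDF.schema.map { f =>
  if (decimalCols.contains(f.name)) f.copy(dataType = DecimalType(38, 30)) else f
})

// This no longer throws, because the declared schema and the row contents agree.
val reTyped = spark.createDataFrame(yearDF.rdd, schemaWithDecimals)

// If the Hive table needs double columns, cast after the schema has been applied.
val hiveReady = decimalCols.foldLeft(reTyped) { (df, c) =>
  df.withColumn(c, col(c).cast(DoubleType))
}
hiveReady.write.format("csv").save("hdfs/location")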

Hope it helps!
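
A related sketch, building on the df2 cast attempted in the question: once the decimal columns have been cast to double on the DataFrame itself, the rows carry java.lang.Double values, so reusing that frame's own schema (or simply writing df2 directly) keeps the schema and the data in sync.

// Hypothetical follow-up to the question's second attempt: df2 already holds
// java.lang.Double in its rows, so re-applying its own schema is consistent.
val fixedDF = spark.createDataFrame(df2.rdd, df2.schema)
fixedDF.write.format("csv").save("hdfs/location")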