I am trying to move data from Greenplum to HDFS using Spark. I can read the data from the source table successfully, and the Spark-inferred schema of the DataFrame (for the Greenplum table) is:
DataFrame schema:
je_header_id: long (nullable = true)
je_line_num: long (nullable = true)
last_updated_by: decimal(15,0) (nullable = true)
last_updated_by_name: string (nullable = true)
ledger_id: long (nullable = true)
code_combination_id: long (nullable = true)
balancing_segment: string (nullable = true)
cost_center_segment: string (nullable = true)
period_name: string (nullable = true)
effective_date: timestamp (nullable = true)
status: string (nullable = true)
creation_date: timestamp (nullable = true)
created_by: decimal(15,0) (nullable = true)
entered_dr: decimal(38,20) (nullable = true)
entered_cr: decimal(38,20) (nullable = true)
entered_amount: decimal(38,20) (nullable = true)
accounted_dr: decimal(38,20) (nullable = true)
accounted_cr: decimal(38,20) (nullable = true)
accounted_amount: decimal(38,20) (nullable = true)
xx_last_update_log_id: integer (nullable = true)
source_system_name: string (nullable = true)
period_year: decimal(15,0) (nullable = true)
period_num: decimal(15,0) (nullable = true)
The corresponding schema of the Hive table is:
je_header_id:bigint|je_line_num:bigint|last_updated_by:bigint|last_updated_by_name:string|ledger_id:bigint|code_combination_id:bigint|balancing_segment:string|cost_center_segment:string|period_name:string|effective_date:timestamp|status:string|creation_date:timestamp|created_by:bigint|entered_dr:double|entered_cr:double|entered_amount:double|accounted_dr:double|accounted_cr:double|accounted_amount:double|xx_last_update_log_id:int|source_system_name:string|period_year:bigint|period_num:bigint
Using the Hive table schema above, I created a StructType with the following logic:
import org.apache.spark.sql.types._

// Maps a Hive type name to the corresponding Spark SQL DataType.
// Note: Hive's "date" is mapped to TimestampType here, not DateType.
def convertDatatype(datatype: String): DataType = datatype match {
  case "string"    => StringType
  case "bigint"    => LongType
  case "int"       => IntegerType
  case "double"    => DoubleType
  case "date"      => TimestampType
  case "boolean"   => BooleanType
  case "timestamp" => TimestampType
  case other       => throw new IllegalArgumentException(s"Unsupported Hive type: $other")
}
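For reference, here is a minimal sketch of how the pipe-delimited Hive schema string could be fed through this function to build the StructType (the variable hiveSchemaString and the parsing code are my assumptions; this step is not shown in the question):

// Assumed: the Hive schema exactly as quoted above, truncated here for brevity
val hiveSchemaString = "je_header_id:bigint|je_line_num:bigint|last_updated_by:bigint|..."

// Split the "name:type" pairs on '|' and map each Hive type name
// through convertDatatype to produce the target Spark schema
val newSchema = StructType(
  hiveSchemaString.split("\\|").map { pair =>
    val Array(name, hiveType) = pair.split(":")
    StructField(name, convertDatatype(hiveType), nullable = true)
  }
)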
Prepared schema:
je_header_id: long (nullable = true)
je_line_num: long (nullable = true)
last_updated_by: long (nullable = true)
last_updated_by_name: string (nullable = true)
ledger_id: long (nullable = true)
code_combination_id: long (nullable = true)
balancing_segment: string (nullable = true)
cost_center_segment: string (nullable = true)
period_name: string (nullable = true)
effective_date: timestamp (nullable = true)
status: string (nullable = true)
creation_date: timestamp (nullable = true)
created_by: long (nullable = true)
entered_dr: double (nullable = true)
entered_cr: double (nullable = true)
entered_amount: double (nullable = true)
accounted_dr: double (nullable = true)
accounted_cr: double (nullable = true)
accounted_amount: double (nullable = true)
xx_last_update_log_id: integer (nullable = true)
source_system_name: string (nullable = true)
period_year: long (nullable = true)
period_num: long (nullable = true)
When I try to apply newSchema to the DataFrame, I get this exception:
java.lang.RuntimeException: java.math.BigDecimal is not a valid external type for schema of bigint
I understand that it is failing while trying to convert BigDecimal to Bigint, but could someone tell me how to cast bigint to a Spark-compatible datatype? If not, how can I modify my logic to supply the proper datatypes in the case statement for this bigint/bigdecimal problem?
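For context, this kind of exception typically surfaces when the new schema is applied by rebuilding the DataFrame from its RDD; a sketch of that step (my guess, since the question does not show how newSchema is applied):

// Relabel the existing rows with the Hive-derived schema
val newDf = spark.createDataFrame(df.rdd, newSchema)

// createDataFrame only swaps the schema, it does not convert the values:
// the rows still hold java.math.BigDecimal objects, so an action on newDf
// (e.g. show() or write) fails with the RuntimeException quoted above.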
Answer 0 (score: 1)
Looking at your problem, it seems you are trying to convert bigint values to BigDecimal, which is not right. BigDecimal is a decimal number that must have a fixed precision (the maximum number of digits) and scale (the number of digits to the right of the decimal point), and your values appear to be longs. Here, instead of using the BigDecimal datatype, try using LongType to convert the bigint values correctly. See if this solves your purpose.
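Concretely, one common way to apply this advice is to cast each column to its target type instead of swapping the schema wholesale; a sketch, reusing df and newSchema from the question:

import org.apache.spark.sql.functions.col

// cast() inserts a real value conversion (decimal -> bigint, decimal -> double),
// which is exactly what the plain schema swap was missing
val castedDf = df.select(
  newSchema.fields.map(f => col(f.name).cast(f.dataType)): _*
)

// castedDf now matches the Hive table's column types and can be written out.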