Precision loss when computing a sum over a DataFrame

Asked: 2017-02-10 11:57:50

Tags: scala apache-spark spark-dataframe

I have a DataFrame containing data like this:

unit,sensitivity currency,trading desk  ,portfolio       ,issuer        ,bucket ,underlying ,delta        ,converted sensitivity
ES  ,USD                 ,EQ DERIVATIVES,ESEQRED_LH_MIDX ,5GOY          ,5      ,repo       ,0.00002      ,0.00002
ES  ,USD                 ,EQ DERIVATIVES,IND_GLOBAL1     ,no_localizado ,8      ,repo       ,-0.16962     ,-0.15198
ES  ,EUR                 ,EQ DERIVATIVES,ESEQ_UKFLOWN    ,IGN2          ,8      ,repo       ,-0.00253     ,-0.00253
ES  ,USD                 ,EQ DERIVATIVES,BASKETS1        ,9YFV          ,5      ,spot       ,-1003.64501  ,-899.24586

I have to perform an aggregation on this data, doing something like this:

import org.apache.spark.sql.functions.sum

val filteredDF = myDF.filter("unit = 'ES' AND `trading desk` = 'EQ DERIVATIVES' AND issuer = '5GOY' AND bucket = 5 AND underlying = 'repo' AND portfolio ='ESEQRED_LH_MIDX'")
                     .groupBy("unit","trading desk","portfolio","issuer","bucket","underlying")
                     .agg(sum("converted_sensitivity"))

But I am seeing that I lose precision in the aggregated sum. How can I make sure that every value of "converted_sensitivity" is converted to BigDecimal(25,5) before the sum operation is applied to the new aggregated column?
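
For illustration only (not part of the original question): summing many small Double values accumulates binary rounding error, while exact decimals do not. A minimal, self-contained sketch of the effect using plain Scala collections:

val asDoubles = Seq.fill(1000000)(0.00002)                 // Double literals are binary approximations
println(asDoubles.sum)                                     // typically not exactly 20.0

val asDecimals = Seq.fill(1000000)(BigDecimal("0.00002"))  // exact decimal values
println(asDecimals.sum)                                    // exactly 20.00000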

Thank you very much.

1 Answer:

Answer 0 (score: 1)

To ensure the conversion, you can use DecimalType on the DataFrame column.

According to the Spark documentation, DecimalType is:

The data type representing java.math.BigDecimal values. A Decimal that must have fixed precision (the maximum number of digits) and scale (the number of digits on the right side of the dot). The precision can be up to 38, and the scale can also be up to 38 (scale less than or equal to precision). The default precision and scale is (10, 0).

You can see this here.
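
As a quick check of what precision and scale mean here (a small illustrative sketch, not from the original answer): precision is the total number of digits and scale is the number of digits to the right of the decimal point, so DecimalType(25,5) comfortably holds a value like -1003.64501.

import org.apache.spark.sql.types.DecimalType

// 25 total digits, 5 of them after the decimal point
val sensitivityType = DecimalType(25, 5)
println(sensitivityType.precision)  // 25
println(sensitivityType.scale)      // 5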

To convert your data, you can use the cast function of the Column object, like this:

import org.apache.spark.sql.functions.{col, sum}
import org.apache.spark.sql.types.DecimalType

val filteredDF = myDF.filter("unit = 'ES' AND `trading desk` = 'EQ DERIVATIVES' AND issuer = '5GOY' AND bucket = 5 AND underlying = 'repo' AND portfolio ='ESEQRED_LH_MIDX'")
                 .withColumn("new_column_big_decimal", col("converted_sensitivity").cast(DecimalType(25,5)))
                 .groupBy("unit","trading desk","portfolio","issuer","bucket","underlying")
                 .agg(sum("new_column_big_decimal"))
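
A variant of the same idea (my own sketch, not part of the original answer) is to cast inside the aggregation itself, so no intermediate column is needed. Calling printSchema afterwards confirms the sum is carried as a decimal type rather than a double:

import org.apache.spark.sql.functions.{col, sum}
import org.apache.spark.sql.types.DecimalType

val aggregatedDF = myDF
  .filter("unit = 'ES' AND `trading desk` = 'EQ DERIVATIVES' AND issuer = '5GOY' AND bucket = 5 AND underlying = 'repo' AND portfolio = 'ESEQRED_LH_MIDX'")
  .groupBy("unit", "trading desk", "portfolio", "issuer", "bucket", "underlying")
  .agg(sum(col("converted_sensitivity").cast(DecimalType(25, 5))).as("sum_converted_sensitivity"))

aggregatedDF.printSchema()  // the aggregated column is reported as a decimal type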