How do I check for null on a scala.math.BigDecimal?

Asked: 2017-07-18 19:16:46

Tags: scala apache-spark

The following code throws a NullPointerException, even though it uses Option(x._1.F2).isDefined && Option(x._2.F2).isDefined to guard against null values.

case class Cols (F1: String, F2: BigDecimal, F3: Int, F4: Date, ...)

def readTable(): Dataset[Cols] = {
    import sqlContext.sparkSession.implicits._

    sqlContext.read.format("jdbc").options(Map(
      "driver" -> "com.microsoft.sqlserver.jdbc.SQLServerDriver",
      "url" -> jdbcSqlConn,
      "dbtable" -> s"..."
    )).load()
      .select("F1", "F2", "F3", "F4")
      .as[Cols]
  }

import org.apache.spark.sql.{functions => func}
val j = readTable().joinWith(readTable(), func.lit(true))
j.filter(x => 
  (if (Option(x._1.F2).isDefined && Option(x._2.F2).isDefined 
       && (x._1.F2 - x._2.F2 < 1)) 1 else 0)  // line 51
  + ..... > 100)

I also tried !(x._1.F2 == null || x._2.F2 == null), but I still get the exception.

The exception is:

java.lang.NullPointerException
        at scala.math.BigDecimal.$minus(BigDecimal.scala:563)
        at MappingPoint$$anonfun$compare$1.apply(MappingPoint.scala:51)
        at MappingPoint$$anonfun$compare$1.apply(MappingPoint.scala:44)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:395)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:234)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:228)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:108)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)

Update: I tried the following expression, but execution still fails at the x._1.F2 - x._2.F2 line. Is this the right way to check whether a BigDecimal is null?

(if (!(Option(x._1.F2).isDefined && Option(x._2.F2).isDefined
       && x._1.F2 != null && x._2.F2 != null)) 0
 else (if (x._1.F2 - x._2.F2 < 1) 1 else 0))

Update 2

After I wrapped the subtraction in math.abs((l.F2 - r.F2).toDouble), the exception went away. Why is that?
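For reference, the null-safe comparison can be sketched in plain Scala without Spark. This is only a sketch: `safeDec` and `scoreDiff` are hypothetical helper names introduced here, and it assumes (as the stack trace suggests) that either the Scala wrapper itself or its inner java.math.BigDecimal may be null:

```scala
// Minimal sketch, plain Scala (no Spark). `safeDec` and `scoreDiff` are
// made-up helper names, not part of the original code.
object NullSafeDiff {
  // Treat both a null wrapper and a non-null wrapper around a null
  // java.math.BigDecimal as "absent".
  def safeDec(v: BigDecimal): Option[BigDecimal] =
    Option(v).filter(_.bigDecimal != null)

  // Returns 1 when both sides are present and their difference is < 1,
  // otherwise 0 -- mirroring the if-expression in the question's filter.
  def scoreDiff(l: BigDecimal, r: BigDecimal): Int =
    (for (a <- safeDec(l); b <- safeDec(r)) yield
      if (a - b < 1) 1 else 0).getOrElse(0)
}
```

Because both null cases collapse to `None`, the subtraction only ever runs on fully materialized values.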

2 answers:

Answer 0 (score: 0)

Try adding this to your if statement:

&& (x._1.F2 && x._2.F2) != null

I ran into a similar problem in Java, and this worked for me.

Answer 1 (score: 0)

Looking at the source code of BigDecimal, at line 563: https://github.com/scala/scala/blob/v2.11.8/src/library/scala/math/BigDecimal.scala#L563

It's possible that x._1.F2.bigDecimal or x._2.F2.bigDecimal is null, though I'm not sure how that could happen, since the constructor checks for it. But maybe add a null check there and see whether that solves the problem?
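Following that suggestion, the extra check can be expressed as a small predicate. This is a sketch with a made-up name (`isFullyDefined`), and whether Spark actually produces a wrapper with a null inner value is the speculation above, not something confirmed here:

```scala
// Sketch only: `isFullyDefined` is a hypothetical helper. It guards against
// both a null scala.math.BigDecimal reference and (the case speculated
// above) a non-null wrapper whose underlying java.math.BigDecimal is null.
def isFullyDefined(v: BigDecimal): Boolean =
  v != null && v.bigDecimal != null
```

In the question's filter this would replace the Option(...).isDefined guards, e.g. `isFullyDefined(x._1.F2) && isFullyDefined(x._2.F2)`.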

By the way, you should really avoid all the ._1 and ._2 accessors... you should be able to do something like this:

val (l: Cols, r: Cols) = x

to extract the tuple values.
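A minimal, Spark-free illustration of that destructuring (the Cols fields are trimmed down here for brevity):

```scala
// Trimmed-down Cols for illustration only.
case class Cols(F1: String, F2: BigDecimal)

val x: (Cols, Cols) = (Cols("a", BigDecimal(1)), Cols("b", BigDecimal(2)))
val (l, r) = x  // pattern-match the tuple into named values
// l.F2 and r.F2 now read more clearly than x._1.F2 and x._2.F2
```

Inside the Spark filter, the same idea can be written directly as a partial function: `j.filter { case (l, r) => ... }`.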