Spark DF: check column value based on the previous column

Date: 2018-12-30 15:24:59

Tags: scala, apache-spark

Hi, I'm stuck trying to implement a custom condition on a Spark DF. Basically, I want to flag each row as 0 or 1 in a status column based on the null values present in its columns, i.e.:

  if any of the columns contains null, the status for that row should be 1, otherwise 0

  import org.apache.spark.sql.{Column, Row}
  import org.apache.spark.sql.functions._
  import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

  // Sample data: the "number" column is null in the last row
  val someData = Seq(
    Row(8, "bat"),
    Row(64, "mouse"),
    Row(null, "rat")
  )

  val someSchema = List(
    StructField("number", IntegerType, true),
    StructField("word", StringType, true)
  )

  val someDF = sparkSession.createDataFrame(
    sparkSession.sparkContext.parallelize(someData),
    StructType(someSchema)
  )
val fieldList: Seq[Column] = Seq(col("word"),col("number"))


  // My attempt: fold over the columns, deriving "status" from each one in turn
  val df = fieldList.foldLeft(someDF)((acc, f) => {
    val dfin = acc.withColumn("status", lit(0))
    dfin.withColumn(
      "status",
      when(f.isNotNull and col("status").isin(0), 0).otherwise(1)
    )
  })
But this ends up checking only against the last column in fieldList (each iteration resets status to 0, so the result from earlier columns is overwritten), whereas the result should look like this:

col1   col2   status
zyx    pqe    0
null   zyz    1
xdc    null   1
null   null   1
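
For reference, a minimal sketch of a fold-based variant under the same assumptions (the someDF and fieldList defined above, and a "status" column name) that accumulates the null check instead of resetting status on every iteration:

    // Seed "status" with 0 once, then OR each column's null check into it,
    // so a null found in an earlier column is not overwritten by later iterations.
    val fixedDf = fieldList.foldLeft(someDF.withColumn("status", lit(0)))((acc, f) =>
      acc.withColumn("status", when(f.isNull or col("status") === 1, 1).otherwise(0))
    )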

1 Answer:

Answer 0 (score: 1):

val df = someDF.withColumn("status", when(fieldList.map(c => c.isNull).reduce(_ || _), 1).otherwise(0))

The idea is to first map each column to a null check (the map), and then a simple reduce yields true if at least one of them is null.
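
Put together with the someDF from the question, a runnable version of this might look like the following (a minimal sketch; the output shown in the comments is what I would expect from the sample data):

    // Build a single boolean expression: true if any of the listed columns is null
    val anyNull: Column = fieldList.map(_.isNull).reduce(_ || _)

    val result = someDF.withColumn("status", when(anyNull, 1).otherwise(0))
    result.show()
    // +------+-----+------+
    // |number| word|status|
    // +------+-----+------+
    // |     8|  bat|     0|
    // |    64|mouse|     0|
    // |  null|  rat|     1|
    // +------+-----+------+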