How to count the number of missing values in each row of a DataFrame - Spark Scala?

Asked: 2018-11-18 18:28:09

Tags: scala apache-spark apache-spark-sql spark-streaming

I want to count the number of missing values in each row of a DataFrame in Spark Scala.

Code:

val samplesqlDF = spark.sql("SELECT * FROM sampletable")

samplesqlDF.show()

Input DataFrame:

    +------+-----+--------+-----------+
    | name | age | degree | Place     |
    +------+-----+--------+-----------+
    | Ram  |     | MCA    | Bangalore |
    |      | 25  |        |           |
    |      | 26  | BE     |           |
    | Raju | 21  | Btech  | Chennai   |
    +------+-----+--------+-----------+

The output DataFrame (with a per-row missing-value count) should look like this:

    +------+-----+--------+-----------+----------+
    | name | age | degree | Place     | rowcount |
    +------+-----+--------+-----------+----------+
    | Ram  |     | MCA    | Bangalore | 1        |
    |      | 25  |        |           | 3        |
    |      | 26  | BE     |           | 2        |
    | Raju | 21  | Btech  | Chennai   | 0        |
    +------+-----+--------+-----------+----------+

I am a beginner in Scala and Spark. Thanks in advance.

2 answers:

Answer 0 (score: 0)

It looks like you want to compute the null count dynamically, for whatever columns the DataFrame happens to have. Check this out:

import org.apache.spark.sql.functions.{col, when}
import spark.implicits._ // for toDF on a local Seq (in spark-shell this is already in scope)

val df = Seq(("Ram",null,"MCA","Bangalore"),(null,"25",null,null),(null,"26","BE",null),("Raju","21","Btech","Chennai")).toDF("name","age","degree","Place")
df.show(false)
// For every column c, add an indicator column c_null: 1 if the value is null, else 0
val df2 = df.columns.foldLeft(df)( (df,c) => df.withColumn(c+"_null", when(col(c).isNull,1).otherwise(0) ) )
df2.createOrReplaceTempView("student")
// Build "name_null+age_null+... as null_count" and the full SELECT string dynamically
val sql_str_null = df.columns.map( x => x+"_null").mkString(" ","+"," as null_count ")
val sql_str_full = df.columns.mkString( "select ", ",", " , " + sql_str_null + " from student")
spark.sql(sql_str_full).show(false)

Output:

+----+----+------+---------+----------+
|name|age |degree|Place    |null_count|
+----+----+------+---------+----------+
|Ram |null|MCA   |Bangalore|1         |
|null|25  |null  |null     |3         |
|null|26  |BE    |null     |2         |
|Raju|21  |Btech |Chennai  |0         |
+----+----+------+---------+----------+
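If the foldLeft pattern above looks opaque, the same accumulation can be seen without Spark at all. The sketch below (the object and method names are made up for illustration) models a row as a Map from column name to a possibly-null value and builds the same per-column `_null` indicators, then sums them:

```scala
// Plain-Scala sketch of the foldLeft indicator pattern above.
// Assumption: a row is a Map[String, String]; an absent or null entry is "missing".
// No Spark on the classpath is required to see the shape of the transformation.
object FoldLeftSketch {
  val columns = Seq("name", "age", "degree", "Place")

  // For each column c, add a c_null indicator (1 if missing, else 0),
  // mirroring df.withColumn(c + "_null", when(col(c).isNull, 1).otherwise(0))
  def addIndicators(row: Map[String, String]): Map[String, Int] =
    columns.foldLeft(Map.empty[String, Int]) { (acc, c) =>
      acc + (c + "_null" -> (if (row.getOrElse(c, null) == null) 1 else 0))
    }

  // Summing the indicators is what the generated "a_null+b_null+..." SQL does
  def nullCount(row: Map[String, String]): Int =
    addIndicators(row).values.sum
}
```

For example, `FoldLeftSketch.nullCount(Map("age" -> "25"))` yields 3, matching the second row of the output above.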

Answer 1 (score: 0)

Another possibility, which also treats the empty string "" as missing, and which deliberately avoids foldLeft just to show an alternative:

import org.apache.spark.sql.functions._
import spark.implicits._ // for toDF on a local Seq (in spark-shell this is already in scope)

val df = Seq(("Ram",null,"MCA","Bangalore"),(null,"25",null,""),(null,"26","BE",null),("Raju","21","Btech","Chennai")).toDF("name","age","degree","place")

// Count per row the null or "" columns! 
val null_counter = Seq("name", "age", "degree", "place").map(x => when(col(x) === "" || col(x).isNull , 1).otherwise(0)).reduce(_ + _)  

val df2 = df.withColumn("nulls_cnt", null_counter)

df2.show(false)

Returns:

 +----+----+------+---------+---------+
 |name|age |degree|place    |nulls_cnt|
 +----+----+------+---------+---------+
 |Ram |null|MCA   |Bangalore|1        |
 |null|25  |null  |         |3        |
 |null|26  |BE    |null     |2        |
 |Raju|21  |Btech |Chennai  |0        |
 +----+----+------+---------+---------+
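The `map(...).reduce(_ + _)` trick above builds one Spark Column expression out of per-column indicators. The same map-then-reduce idea can be sketched in plain Scala (the object name is made up for illustration), with a row modeled as a `Seq[String]` in column order and both `null` and `""` counted as missing:

```scala
// Plain-Scala sketch of the map-then-reduce counting trick above.
// Assumption: no Spark available, so a row is just a Seq[String] in column order.
object ReduceSketch {
  def missingCnt(row: Seq[String]): Int =
    row
      .map(c => if (c == null || c == "") 1 else 0) // per-column indicator, like when(...).otherwise(0)
      .reduce(_ + _)                                // sum the indicators into one count
}
```

For example, `ReduceSketch.missingCnt(Seq(null, "25", null, ""))` returns 3, matching the second row of the table above.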