How to handle missing-value input in a Dataset - Spark / Scala

Posted: 2017-01-17 20:32:14

Tags: scala apache-spark spark-dataframe

I have data like this:

8213034705_cst,95,2.927373,jake7870,0,95,117.5,xbox,3
,10,0.18669,parakeet2004,5,1,120,xbox,3
8213060420_gfd,26,0.249757,bluebubbles_1,25,1,120,xbox,3
8213060420_xcv,80,0.59059,sa4741,3,1,120,xbox,3
,75,0.657384,jhnsn2273,51,1,120,xbox,3

I am trying to put "missing value" into the first column for the records where it is missing (or drop those records entirely). I am running the following code, but it gives me an error:

import org.apache.spark._
import org.apache.spark.SparkContext._
import org.apache.spark.sql._
import org.apache.log4j._
import org.apache.spark.sql.functions
import java.lang.String
import org.apache.spark.sql.functions.udf
//import spark.implicits._

object DocParser2 {

  case class Auction(auctionid: Option[String], bid: Double, bidtime: Double, bidder: String, bidderrate: Integer, openbid: Double, price: Double, item: String, daystolive: Integer)

  def readint(ip: Option[String]): String = ip match {
    case Some(ip) => ip.split("_")(0)
    case None     => "missing value"
  }

  def main(args: Array[String]) = {

    val spark = SparkSession.builder.appName("DocParser").master("local[*]").getOrCreate()

    import spark.implicits._

    val intUDF = udf(readint _)

    val lines = spark.read.format("csv").option("header", "false").option("inferSchema", true).load("data/auction2.csv").toDF("auctionid", "bid", "bidtime", "bidder", "bidderrate", "openbid", "price", "item", "daystolive")

    val recordsDS = lines.as[Auction]

    recordsDS.printSchema()

    println("splitting auction id into String and Int")

    // recordsDS.withColumn("auctionid_int", java.lang.String.split('auctionid, "_")).show() // some error with the split method

    val auctionidcol = recordsDS.col("auctionid")

    recordsDS.withColumn("auctionid_int", intUDF('auctionid)).show()

    spark.stop()
  }
}

But it fails with the following runtime error:

java.lang.String cannot be cast to scala.Option, at the line val intUDF = udf(readint _)

Can you help me figure out this error?

Thanks

2 Answers:

Answer 0 (score: 0)

A UDF never receives an Option as input; it is passed the actual type instead. In the case of String you can simply do a null check inside your UDF; for primitive types that cannot be null (Int, Double, etc.) other solutions exist...
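Applied to the question's code, a minimal sketch of that suggestion (assuming the same recordsDS Dataset from the question) declares the parameter as a plain String and null-checks it, since Spark passes null, not None, for a missing String value:

    import org.apache.spark.sql.functions.udf

    // Take String (not Option[String]) and guard against null explicitly:
    val intUDF = udf { (ip: String) =>
      if (ip == null) "missing value"
      else ip.split("_")(0)
    }

    recordsDS.withColumn("auctionid_int", intUDF('auctionid)).show()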

Answer 1 (score: 0)

You can read the csv file with spark.read.csv and drop the records containing missing values with na.drop(); tested on Spark 2.0.2:

val df = spark.read.option("header", "false").option("inferSchema", "true").csv("Path to Csv file")

df.show
+--------------+---+--------+-------------+---+---+-----+----+---+
|           _c0|_c1|     _c2|          _c3|_c4|_c5|  _c6| _c7|_c8|
+--------------+---+--------+-------------+---+---+-----+----+---+
|8213034705_cst| 95|2.927373|     jake7870|  0| 95|117.5|xbox|  3|
|          null| 10| 0.18669| parakeet2004|  5|  1|120.0|xbox|  3|
|8213060420_gfd| 26|0.249757|bluebubbles_1| 25|  1|120.0|xbox|  3|
|8213060420_xcv| 80| 0.59059|       sa4741|  3|  1|120.0|xbox|  3|
|          null| 75|0.657384|    jhnsn2273| 51|  1|120.0|xbox|  3|
+--------------+---+--------+-------------+---+---+-----+----+---+

df.na.drop().show
+--------------+---+--------+-------------+---+---+-----+----+---+
|           _c0|_c1|     _c2|          _c3|_c4|_c5|  _c6| _c7|_c8|
+--------------+---+--------+-------------+---+---+-----+----+---+
|8213034705_cst| 95|2.927373|     jake7870|  0| 95|117.5|xbox|  3|
|8213060420_gfd| 26|0.249757|bluebubbles_1| 25|  1|120.0|xbox|  3|
|8213060420_xcv| 80| 0.59059|       sa4741|  3|  1|120.0|xbox|  3|
+--------------+---+--------+-------------+---+---+-----+----+---+