How to load a CSV with missing numeric values into a Spark data frame

Asked: 2015-07-21 15:10:20

Tags: csv apache-spark apache-zeppelin spark-dataframe

I am trying to load a CSV file into a Spark data frame from an Apache Zeppelin notebook using spark-csv [1]. When a numeric field has no value, the parser fails on that row and the row is skipped.

I would have expected the row to be loaded into the data frame with the missing value set to NULL, so that aggregations simply ignore it.

%dep
z.reset()
z.addRepo("my-nexus").url("<my_local_nexus_repo_that_is_a_proxy_of_public_repos>")
z.load("com.databricks:spark-csv_2.10:1.1.0")


%spark
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.types._
import com.databricks.spark.csv._
import org.apache.spark.sql.functions._

val schema = StructType(
    StructField("identifier", StringType, true) ::
    StructField("name", StringType, true) ::
    StructField("height", DoubleType, true) :: 
    Nil)

val sqlContext = new SQLContext(sc)
val df = sqlContext.read.format("com.databricks.spark.csv")
                        .schema(schema)
                        .option("header", "true")
                        .load("file:///home/spark_user/data.csv")

df.describe("height").show()

Here is the content of the data file /home/spark_user/data.csv:

identifier,name,height
1,sam,184
2,cath,180
3,santa,     <-- note that there is no height recorded for Santa!

And here is the output:

+-------+------+
|summary|height|
+-------+------+
|  count|     2|    <- only 2 of 3 rows loaded, i.e. sam and cath
|   mean| 182.0|
| stddev|   2.0|
|    min| 180.0|
|    max| 184.0|
+-------+------+

In Zeppelin's log I see the following error when Santa's row is parsed:

ERROR [2015-07-21 16:42:09,940] ({Executor task launch worker-45} CsvRelation.scala[apply]:209) - Exception while parsing line: 3,santa,.
        java.lang.NumberFormatException: empty String
        at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:1842)
        at sun.misc.FloatingDecimal.parseDouble(FloatingDecimal.java:110)
        at java.lang.Double.parseDouble(Double.java:538)
        at scala.collection.immutable.StringLike$class.toDouble(StringLike.scala:232)
        at scala.collection.immutable.StringOps.toDouble(StringOps.scala:31)
        at com.databricks.spark.csv.util.TypeCast$.castTo(TypeCast.scala:42)
        at com.databricks.spark.csv.CsvRelation$$anonfun$com$databricks$spark$csv$CsvRelation$$parseCSV$1.apply(CsvRelation.scala:198)
        at com.databricks.spark.csv.CsvRelation$$anonfun$com$databricks$spark$csv$CsvRelation$$parseCSV$1.apply(CsvRelation.scala:180)
        at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
        at org.apache.spark.sql.execution.Aggregate$$anonfun$doExecute$1$$anonfun$6.apply(Aggregate.scala:129)
        at org.apache.spark.sql.execution.Aggregate$$anonfun$doExecute$1$$anonfun$6.apply(Aggregate.scala:126)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:686)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:686)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:70)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
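
The stack trace points at the root cause: TypeCast.castTo hands the raw field straight to Scala's StringOps.toDouble, which has no notion of an empty value:

```scala
// Minimal reproduction outside Spark: an empty string has no numeric form,
// so the plain Scala conversion that spark-csv relies on simply throws.
"".toDouble  // throws java.lang.NumberFormatException: empty String
```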

So you could tell me "so far so good"... and you would be right ;)

Now let's add an extra column, say age, for which I always have data:

identifier,name,height,age
1,sam,184,30
2,cath,180,32
3,santa,,70

Now let's politely ask for some statistics on age:

%spark
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.types._
import com.databricks.spark.csv._
import org.apache.spark.sql.functions._

val schema = StructType(
    StructField("identifier", StringType, true) ::
    StructField("name", StringType, true) ::
    StructField("height", DoubleType, true) :: 
    StructField("age", DoubleType, true) :: 
    Nil)

val sqlContext = new SQLContext(sc)
val df = sqlContext.read.format("com.databricks.spark.csv")
                        .schema(schema)
                        .option("header", "true")
                        .load("file:///home/spark_user/data2.csv")

df.describe("age").show()

The result:

+-------+----+
|summary| age|
+-------+----+
|  count|   2|
|   mean|31.0|
| stddev| 1.0|
|    min|30.0|
|    max|32.0|
+-------+----+

All wrong! Because Santa's height is unknown, the whole row is lost, and the age statistics are computed from Sam and Cath only, even though Santa has a perfectly valid age.

My real question is: what value do I need to plug in for Santa's height so that the CSV can be loaded at all? I tried setting the schema to all StringType, but then…

The next question is more about the cleanup step.

I found in the API that Spark can handle N/A values. So I thought I could load my data with all columns set to StringType, do some cleanup, and only then set the proper types, as below:

%spark
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.types._
import com.databricks.spark.csv._
import org.apache.spark.sql.functions._

val schema = StructType(
StructField("identifier", StringType, true) ::
StructField("name", StringType, true) ::
StructField("height", StringType, true) ::
StructField("age", StringType, true) ::
Nil)

val sqlContext = new SQLContext(sc)
val df = sqlContext.read.format("com.databricks.spark.csv")
                        .schema(schema)
                        .option("header", "true")
                        .load("file:///home/spark_user/data2.csv")

// eg. for each column of my dataframe, replace empty string by null
df.na.replace( "*", Map("" -> null) )

val toDouble = udf[Double, String]( _.toDouble)
val df2 = df.withColumn("age", toDouble(df("age")))

df2.describe("age").show()

But df.na.replace() throws an exception and stops:

java.lang.IllegalArgumentException: Unsupported value type java.lang.String ().
        at org.apache.spark.sql.DataFrameNaFunctions.org$apache$spark$sql$DataFrameNaFunctions$$convertToDouble(DataFrameNaFunctions.scala:417)
        at org.apache.spark.sql.DataFrameNaFunctions$$anonfun$4.apply(DataFrameNaFunctions.scala:337)
        at org.apache.spark.sql.DataFrameNaFunctions$$anonfun$4.apply(DataFrameNaFunctions.scala:337)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
        at scala.collection.AbstractTraversable.map(Traversable.scala:105)
        at org.apache.spark.sql.DataFrameNaFunctions.replace0(DataFrameNaFunctions.scala:337)
        at org.apache.spark.sql.DataFrameNaFunctions.replace(DataFrameNaFunctions.scala:304)

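A workaround sketch, assuming a Scala UDF returning Option[Double] yields a nullable DoubleType column: skip na.replace entirely and fold the empty-string check into the cast itself.

```scala
import org.apache.spark.sql.functions.udf

// Hypothetical helper, not part of spark-csv: empty or missing strings
// become None (i.e. NULL in the column), everything else is parsed.
val toDoubleOrNull = udf { s: String =>
  if (s == null || s.trim.isEmpty) None else Some(s.toDouble)
}

val df2 = df.withColumn("height", toDoubleOrNull(df("height")))
            .withColumn("age", toDoubleOrNull(df("age")))

df2.describe("age").show()
```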
Any help & hints much appreciated!!

[1] https://github.com/databricks/spark-csv

1 Answer:

Answer (score: 5):

Spark-csv is missing this option. It has been fixed in the master branch, so I guess you should either build from that or wait for the next stable release.
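
For reference, the fix landed as a reader option; in later spark-csv releases (check the project README for the exact version) empty fields can be mapped to nulls at load time:

```scala
// Sketch assuming the treatEmptyValuesAsNulls option from a post-1.1.0
// spark-csv release; with it, Santa's empty height loads as NULL and
// describe() counts all three rows for age.
val df = sqlContext.read.format("com.databricks.spark.csv")
                        .schema(schema)
                        .option("header", "true")
                        .option("treatEmptyValuesAsNulls", "true")
                        .load("file:///home/spark_user/data2.csv")
```

Spark 2.x's built-in CSV reader makes this unnecessary: its nullValue option defaults to the empty string, so empty non-string fields load as nulls out of the box.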