How to split data in Spark?

Date: 2017-04-08 11:20:07

Tags: scala hadoop apache-spark

I have data in an RDD that looks like this:

scala> c_data
res31: org.apache.spark.rdd.RDD[String] = /home/t_csv MapPartitionsRDD[26] at textFile at <console>:25

scala> c_data.count()
res29: Long = 45212                                                             

scala> c_data.take(2).foreach(println)
age;job;marital;education;default;balance;housing;loan;contact;day;month;duration;campaign;pdays;previous;poutcome;y
58;management;married;tertiary;no;2143;yes;no;unknown;5;may;261;1;-1;0;unknown;no

I want to split the data into another RDD, and I am using:

scala> val csv_data = c_data.map{x=>
 | val w = x.split(";")
 | val age = w(0)
 | val job = w(1)
 | val marital_stat = w(2)
 | val education = w(3)
 | val default = w(4)
 | val balance = w(5)
 | val housing = w(6)
 | val loan = w(7)
 | val contact = w(8)
 | val day = w(9)
 | val month = w(10)
 | val duration = w(11)
 | val campaign = w(12)
 | val pdays = w(13)
 | val previous = w(14)
 | val poutcome = w(15)
 | val Y = w(16)
 | }

which returns:

csv_data: org.apache.spark.rdd.RDD[Unit] = MapPartitionsRDD[28] at map at <console>:27

When I query csv_data, it returns Array((),....). How do I get the first row as a header and the rest as data? Where am I going wrong?

Thanks in advance.

1 answer:

Answer 0 (score: 1)

Your map function returns Unit, so you are mapping to an RDD[Unit]. You can get a tuple of the values by changing your code to:
val csv_data = c_data.map { x =>
  val w = x.split(";")
  ...
  val Y = w(16)
  // the tuple is the last expression in the block, so it becomes the map's return value
  (age, job, marital_stat, education, default, balance, housing, loan, contact, day, month, duration, campaign, pdays, previous, poutcome, Y)
}
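
That fixes the RDD[Unit] problem, but the question also asks how to treat the first row as a header. A minimal sketch for that part (not from the original answer, assuming the header is simply the first line of the file): grab it with first() and filter it out before mapping:

val header = c_data.first()              // "age;job;marital;...;y"
val rows = c_data.filter(_ != header)    // keep only the data lines
// then apply the same map as above to rows instead of c_data

As a side note, with this many columns a case class, or Spark's DataFrame CSV support, is usually easier to work with than a wide tuple.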