I am trying to write a sample Apache Spark program that converts an RDD into a Dataset, but I am getting a compile-time error.
Here are my sample code and the error:
Code:
import org.apache.spark.SparkConf
import org.apache.spark.rdd.RDD
import org.apache.spark.SparkContext
import org.apache.spark.sql.Dataset

object Hello {
  case class Person(name: String, age: Int)

  def main(args: Array[String]) {
    val conf = new SparkConf()
      .setAppName("first example")
      .setMaster("local")
    val sc = new SparkContext(conf)
    val peopleRDD: RDD[Person] = sc.parallelize(Seq(Person("John", 27)))
    val people = peopleRDD.toDS
  }
}
My error is:
value toDS is not a member of org.apache.spark.rdd.RDD[Person]
I have added the Spark core and Spark SQL jars.
My versions are:
Spark 1.6.2
Scala 2.10
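For reference, the equivalent sbt dependencies would look roughly like this (a sketch; I am assuming the standard Spark artifact names rather than showing my actual build file):
// build.sbt sketch for Spark 1.6.2 on Scala 2.10
scalaVersion := "2.10.6"
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.6.2",
  "org.apache.spark" %% "spark-sql"  % "1.6.2"
)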
Answer 0 (score: 5)
toDS becomes available once you import sqlContext.implicits._:
import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)
import sqlContext.implicits._   // brings toDS / toDF into scope for RDDs
val people = peopleRDD.toDS()
Hope this helps.
Answer 1 (score: 3)
I can see two errors in your code.
First, you need import sqlContext.implicits._, because toDS and toDF are defined as implicits on the sqlContext.
Second, the case class should be defined outside the scope of the class that uses it, otherwise a task not serializable exception will occur.
The complete solution is as follows:
import org.apache.spark.SparkConf
import org.apache.spark.rdd.RDD
import org.apache.spark.SparkContext
import org.apache.spark.sql.{Dataset, SQLContext}

object Hello {
  def main(args: Array[String]) {
    val conf = new SparkConf()
      .setAppName("first example")
      .setMaster("local")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._   // required for toDS / toDF

    val peopleRDD: RDD[Person] = sc.parallelize(Seq(Person("John", 27)))
    val people = peopleRDD.toDS
    people.show(false)
  }
}

// defined outside Hello so that it is not captured by the enclosing scope
case class Person(name: String, age: Int)
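If it helps, running this locally should print a small table roughly like the following (exact column spacing may differ by Spark version):
+----+---+
|name|age|
+----+---+
|John| 27|
+----+---+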