I am new to Scala. I am trying to convert a Scala list (which holds the results of some calculated data on a source DataFrame) into a DataFrame or a Dataset. I have not found any direct way to do that. However, I have tried the following approaches to convert my list into a Dataset, and they do not seem to work. I am listing the three situations below.
Can someone give me some hope on how to do this conversion? Thanks.
import org.apache.spark.sql.{DataFrame, Row, SQLContext, DataFrameReader}
import java.sql.{Connection, DriverManager, ResultSet, Timestamp}
import scala.collection._
case class TestPerson(name: String, age: Long, salary: Double)
var tom = new TestPerson("Tom Hanks",37,35.5)
var sam = new TestPerson("Sam Smith",40,40.5)
val PersonList = mutable.MutableList[TestPerson]()
// Adding data to the list
PersonList += tom
PersonList += sam
// Situation 1: Trying to create a Dataset from a list of objects. Result: error
// Throws the following error:
var personDS = Seq(PersonList).toDS()
/*
ERROR:
error: Unable to find encoder for type stored in a Dataset. Primitive types
(Int, String, etc) and Product types (case classes) are supported by
importing sqlContext.implicits._ Support for serializing other types will
be added in future releases.
var personDS = Seq(PersonList).toDS()
*/
// Situation 2: Trying to add data one by one. Result: not working as desired;
// the last record overwrites any existing data in the Dataset.
var personDS = Seq(tom).toDS()
personDS = Seq(sam).toDS()
personDS += sam // not working; Dataset is immutable, so this throws an error
// Situation 3: Working. However, I have consolidated data in the list, which I
// want to convert to a Dataset; if I loop over the list to build comma-separated
// values and pass those here, it works, but that creates an extra loop in the
// code, which I want to avoid.
var personDS = Seq(tom,sam).toDS()
scala> personDS.show()
+---------+---+------+
| name|age|salary|
+---------+---+------+
|Tom Hanks| 37| 35.5|
|Sam Smith| 40| 40.5|
+---------+---+------+
Answer 0 (score: 12)
Try it without Seq:
// Assuming a SparkSession named `spark` is in scope (e.g. in the spark-shell):
import scala.collection.mutable
import spark.implicits._
case class TestPerson(name: String, age: Long, salary: Double)
val tom = TestPerson("Tom Hanks", 37, 35.5)
val sam = TestPerson("Sam Smith", 40, 40.5)
val PersonList = mutable.MutableList[TestPerson]()
PersonList += tom
PersonList += sam
val personDS = PersonList.toDS()
println(personDS.getClass)
personDS.show()
val personDF = PersonList.toDF()
println(personDF.getClass)
personDF.show()
personDF.select("name", "age").show()
Output:
class org.apache.spark.sql.Dataset
+---------+---+------+
| name|age|salary|
+---------+---+------+
|Tom Hanks| 37| 35.5|
|Sam Smith| 40| 40.5|
+---------+---+------+
class org.apache.spark.sql.DataFrame
+---------+---+------+
| name|age|salary|
+---------+---+------+
|Tom Hanks| 37| 35.5|
|Sam Smith| 40| 40.5|
+---------+---+------+
+---------+---+
| name|age|
+---------+---+
|Tom Hanks| 37|
|Sam Smith| 40|
+---------+---+
Also, make sure to move the declaration of the case class TestPerson outside the scope of your object.
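For illustration, here is a minimal sketch of that layout as a standalone application, assuming a hypothetical Main object and app name (everything else follows the answer above):

import scala.collection.mutable
import org.apache.spark.sql.SparkSession

// Declared at the top level, outside the object, so Spark can derive an encoder for it.
case class TestPerson(name: String, age: Long, salary: Double)

object Main {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("list-to-dataset").master("local[*]").getOrCreate()
    import spark.implicits._
    val personList = mutable.MutableList(
      TestPerson("Tom Hanks", 37, 35.5),
      TestPerson("Sam Smith", 40, 40.5))
    personList.toDS().show()   // Dataset[TestPerson], one row per person
    personList.toDF().show()   // DataFrame with columns name, age, salary
    spark.stop()
  }
}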
Answer 1 (score: 1)
Using a sequence:
import org.apache.spark.sql.SparkSession
val spark = SparkSession.builder().appName("Spark-SQL").master("local[2]").getOrCreate()
import spark.implicits._
var tom = new TestPerson("Tom Hanks",37,35.5)
var sam = new TestPerson("Sam Smith",40,40.5)
val PersonList = mutable.MutableList[TestPerson]()
// Adding data to the list
PersonList += tom
PersonList += sam
// This will work.
var personDS = Seq(PersonList).toDS()
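Note that wrapping the whole MutableList in Seq(...) gives a Dataset whose single row holds the entire collection (assuming the Spark version in use can derive an encoder for the sequence type). If the goal is one row per TestPerson, a minimal alternative sketch is to convert the mutable list to an immutable one first:

// One row per TestPerson instead of one row holding the whole list
val personDS2 = PersonList.toList.toDS()
personDS2.show()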
Answer 2 (score: 1)
case class TestPerson(name: String, age: Long, salary: Double)
import org.apache.spark.sql.SparkSession
val spark = SparkSession.builder().appName("List to Dataset").master("local[*]").getOrCreate()
var tom = new TestPerson("Tom Hanks",37,35.5)
var sam = new TestPerson("Sam Smith",40,40.5)
// mutable.MutableList[TestPerson]() is not required; I used the approach below, which is cleaner.
val PersonList = List(tom,sam)
import spark.implicits._
PersonList.toDS().show
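If a DataFrame is needed instead of a Dataset, the same list converts with toDF; a minimal sketch, assuming the same spark.implicits._ import is in scope (the explicit column names below simply mirror the case-class fields):

PersonList.toDF().show()
// or with explicit column names
PersonList.toDF("name", "age", "salary").show()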