Spark 2.0 (final) with Scala 2.11.8. The following super-simple code yields the compilation error: Error:(17, 45) Unable to find encoder for type stored in a Dataset. Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._ Support for serializing other types will be added in future releases.
import org.apache.spark.sql.SparkSession

case class SimpleTuple(id: Int, desc: String)

object DatasetTest {
  val dataList = List(
    SimpleTuple(5, "abc"),
    SimpleTuple(6, "bcd")
  )

  def main(args: Array[String]): Unit = {
    val sparkSession = SparkSession.builder
      .master("local")
      .appName("example")
      .getOrCreate()

    val dataset = sparkSession.createDataset(dataList)
  }
}
Answer 0 (score: 72)
Spark Datasets require an Encoder for the data type that is about to be stored. For common types (atomics, product types) a number of predefined encoders is available, but you first have to import them from SparkSession.implicits to make it work:
val sparkSession: SparkSession = ???
import sparkSession.implicits._
val dataset = sparkSession.createDataset(dataList)
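As an aside, with those implicits in scope the usual shorthand is toDS() on the local collection; a minimal sketch, assuming the dataList from the question:

import sparkSession.implicits._
// toDS() comes from the imported implicits and is equivalent to
// sparkSession.createDataset(dataList)
val dataset = dataList.toDS()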
Alternatively, you can directly provide an explicit
import org.apache.spark.sql.{Encoder, Encoders}
val dataset = sparkSession.createDataset(dataList)(Encoders.product[SimpleTuple])
or an implicit
implicit val enc: Encoder[SimpleTuple] = Encoders.product[SimpleTuple]
val dataset = sparkSession.createDataset(dataList)
Encoder for the stored type.
Note that the Encoders object also provides a number of predefined Encoders for atomic types, and Encoders for complex types can be derived with ExpressionEncoder.
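For illustration only — ExpressionEncoder is internal Catalyst API, so treat this as a sketch that may vary across Spark versions, and the Nested case class here is a hypothetical example:

import org.apache.spark.sql.Encoder
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder

// Hypothetical nested Product type; defined at top level so a TypeTag is available
case class Nested(inner: SimpleTuple, label: String)

// Derive the encoder explicitly through the Catalyst machinery
implicit val nestedEnc: Encoder[Nested] = ExpressionEncoder[Nested]()
val nestedDs = sparkSession.createDataset(Seq(Nested(SimpleTuple(5, "abc"), "x")))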
Further reading:
For Row objects you have to provide the Encoder explicitly, as shown in Encoder error while trying to map dataframe row to updated row.
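For example (a sketch assuming Spark 2.x, where RowEncoder lives in the internal catalyst.encoders package):

import org.apache.spark.sql.{Encoder, Row}
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}
import org.apache.spark.sql.catalyst.encoders.RowEncoder

// Row carries no compile-time type information, so the encoder
// has to be built from an explicit schema
val rowSchema = StructType(Seq(
  StructField("id", IntegerType),
  StructField("desc", StringType)))
implicit val rowEnc: Encoder[Row] = RowEncoder(rowSchema)
val rowDs = sparkSession.createDataset(Seq(Row(5, "abc"), Row(6, "bcd")))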
Answer 1 (score: 42)
For other users (yours is correct): note that it's also important that the case class is defined outside of the object's scope. So:
This fails:
object DatasetTest {
  case class SimpleTuple(id: Int, desc: String)

  val dataList = List(
    SimpleTuple(5, "abc"),
    SimpleTuple(6, "bcd")
  )

  def main(args: Array[String]): Unit = {
    val sparkSession = SparkSession.builder
      .master("local")
      .appName("example")
      .getOrCreate()

    val dataset = sparkSession.createDataset(dataList)
  }
}
Adding the implicits still fails with the same error:
object DatasetTest {
  case class SimpleTuple(id: Int, desc: String)

  val dataList = List(
    SimpleTuple(5, "abc"),
    SimpleTuple(6, "bcd")
  )

  def main(args: Array[String]): Unit = {
    val sparkSession = SparkSession.builder
      .master("local")
      .appName("example")
      .getOrCreate()

    import sparkSession.implicits._
    val dataset = sparkSession.createDataset(dataList)
  }
}
This works:
case class SimpleTuple(id: Int, desc: String)

object DatasetTest {
  val dataList = List(
    SimpleTuple(5, "abc"),
    SimpleTuple(6, "bcd")
  )

  def main(args: Array[String]): Unit = {
    val sparkSession = SparkSession.builder
      .master("local")
      .appName("example")
      .getOrCreate()

    import sparkSession.implicits._
    val dataset = sparkSession.createDataset(dataList)
  }
}
Here's the related bug: https://issues.apache.org/jira/browse/SPARK-13540, so hopefully it will be fixed in the next release of Spark 2.
(Edit: it looks like that bugfix is actually in Spark 2.0.0... so I'm not sure why this still fails.)
Answer 2 (score: -1)
To clarify, answering my own question: if the goal is to define a simple literal Spark DataFrame, rather than to use Scala tuples and implicit conversion, the simpler route is to use the Spark API directly, like this:
import org.apache.spark.sql._
import org.apache.spark.sql.types._
import scala.collection.JavaConverters._
val simpleSchema = StructType(
  StructField("a", StringType) ::
  StructField("b", IntegerType) ::
  StructField("c", IntegerType) ::
  StructField("d", IntegerType) ::
  StructField("e", IntegerType) :: Nil)

val data = List(
  Row("001", 1, 0, 3, 4),
  Row("001", 3, 4, 1, 7),
  Row("001", null, 0, 6, 4),
  Row("003", 1, 4, 5, 7),
  Row("003", 5, 4, null, 2),
  Row("003", 4, null, 9, 2),
  Row("003", 2, 3, 0, 1)
)
val df = spark.createDataFrame(data.asJava, simpleSchema)
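Note that the asJava conversion is what this createDataFrame overload expects: a java.util.List[Row] together with the schema. A quick sanity check (assuming the session is named spark, as in spark-shell):

// Nullable schema fields let the null cells above survive intact
df.printSchema()
df.show()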