I have a collection of sql.Row objects that I want to convert to a DataFrame in Spark 1.6.x.
My rows look like:
events: scala.collection.immutable.Iterable[org.apache.spark.sql.Row] = List([14183197,Browse,80161702,8702170626376335,59,527780275219,List(NavigationLevel, Session)], [14183197,Browse,80161356,8702171157207449,72,527780278061,List(StartPlay, Action, Session)])
Printing them out:
events.foreach(println)
[14183197,Browse,80161702,8702170626376335,59,527780275219,List(NavigationLevel, Session)]
[14183197,Browse,80161356,8702171157207449,72,527780278061,List(StartPlay, Action, Session)]
So I created a schema for the data:
import org.apache.spark.sql.types._

val schema = StructType(Array(
  StructField("trackId", IntegerType, true),
  StructField("location", StringType, true),
  StructField("videoId", IntegerType, true),
  StructField("id", StringType, true),
  StructField("sequence", IntegerType, true),
  StructField("time", StringType, true),
  StructField("type", ArrayType(StringType), true)
))
Then I tried to create the DataFrame via:
val df = sqlContext.createDataFrame(events, schema)
But I get the following error:
error: overloaded method value createDataFrame with alternatives:
(data: java.util.List[_],beanClass: Class[_])org.apache.spark.sql.DataFrame <and>
(rdd: org.apache.spark.api.java.JavaRDD[_],beanClass: Class[_])org.apache.spark.sql.DataFrame <and>
(rdd: org.apache.spark.rdd.RDD[_],beanClass: Class[_])org.apache.spark.sql.DataFrame <and>
(rows: java.util.List[org.apache.spark.sql.Row],schema: org.apache.spark.sql.types.StructType)org.apache.spark.sql.DataFrame <and>
(rowRDD: org.apache.spark.api.java.JavaRDD[org.apache.spark.sql.Row],schema: org.apache.spark.sql.types.StructType)org.apache.spark.sql.DataFrame <and>
(rowRDD: org.apache.spark.rdd.RDD[org.apache.spark.sql.Row],schema: org.apache.spark.sql.types.StructType)org.apache.spark.sql.DataFrame
cannot be applied to (scala.collection.immutable.Iterable[org.apache.spark.sql.Row], org.apache.spark.sql.types.StructType)
I don't know why I'm getting this. Is it because the underlying data in the Row carries no type information?
Any help is greatly appreciated.
Answer 0 (score: 0)
You have to parallelize the collection first. As the alternatives in the error show, createDataFrame has no overload for a plain Scala Iterable; it accepts an RDD[Row], a JavaRDD[Row], or a java.util.List[Row]:
import org.apache.spark.SparkContext

val sc: SparkContext = ??? // your existing SparkContext
// parallelize expects a Seq, so convert the Iterable first
val df = sqlContext.createDataFrame(sc.parallelize(events.toSeq), schema)
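Alternatively, the error's alternatives also list a createDataFrame(rows: java.util.List[Row], schema: StructType) overload. As a sketch, assuming the collection comfortably fits in driver memory, you can convert the Iterable to a java.util.List instead of going through an RDD:

import scala.collection.JavaConverters._

// Alternative: pass a java.util.List[Row] directly; only suitable for
// small collections that fit on the driver.
val df = sqlContext.createDataFrame(events.toList.asJava, schema)
df.printSchema() // verify the declared schema was applied

Either way, make sure the values inside each Row actually match the declared types at runtime (e.g. a Long in a field declared StringType will only fail when the rows are evaluated), since createDataFrame with an explicit schema performs no conversion or eager validation.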