Specifying a schema when converting a Spark RDD to a DataFrame

Date: 2015-09-16 11:32:28

Tags: apache-spark dataframe apache-spark-sql

When converting an RDD of Row objects to a DataFrame, Spark does not seem to apply the schema when a field type is anything other than String. I have tried this on both Spark 1.4 and 1.5.

Snippet (Java API):

JavaPairInputDStream<String, String> directKafkaStream = KafkaUtils.createDirectStream(jssc, String.class, String.class,
                StringDecoder.class, StringDecoder.class, kafkaParams, topicsSet);

directKafkaStream.foreachRDD(rdd -> {
    rdd.foreach(x -> System.out.println("x._1() = " + x._1()));
    rdd.foreach(x -> System.out.println("x._2() = " + x._2()));

    JavaRDD<Row> rowRdd = rdd.map(x -> RowFactory.create(x._2().split("\t")));

    rowRdd.foreach(x -> System.out.println("x = " + x));

    SQLContext sqlContext = SQLContext.getOrCreate(rdd.context());

    StructField id = DataTypes.createStructField("id", DataTypes.IntegerType, true);
    StructField name = DataTypes.createStructField("name", DataTypes.StringType, true);
    List<StructField> fields = Arrays.asList(id, name);
    StructType schema = DataTypes.createStructType(fields);

    DataFrame sampleDf = sqlContext.createDataFrame(rowRdd, schema);

    sampleDf.printSchema();
    sampleDf.show();

    return null;
});

jssc.start();
jssc.awaitTermination();

If DataTypes.StringType is specified for the "id" field, it produces the following output:

x._1() = null
x._2() = 1  item1
x = [1,item1]
root
 |-- id: string (nullable = true)
 |-- name: string (nullable = true)

+---+-----+
| id| name|
+---+-----+
|  1|item1|
+---+-----+

With the code as specified above (DataTypes.IntegerType for the "id" field), it throws an error:

x._1() = null
x._2() = 1  item1
x = [1,item1]
root
 |-- id: integer (nullable = true)
 |-- name: string (nullable = true)

15/09/16 04:13:33 ERROR JobScheduler: Error running job streaming job 1442402013000 ms.0
java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Integer
    at scala.runtime.BoxesRunTime.unboxToInt(BoxesRunTime.java:106)
    at org.apache.spark.sql.catalyst.expressions.BaseGenericInternalRow$class.getInt(rows.scala:40)
    at org.apache.spark.sql.catalyst.expressions.GenericInternalRow.getInt(rows.scala:220)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$IntConverter$.toScalaImpl(CatalystTypeConverters.scala:358)

A similar issue appears on the Spark Confluence, but it is marked as resolved in version 1.3.

1 Answer:

Answer 0 (score: 4)

You are mixing up two different things - data types and the DataFrame schema. When you create a Row like this:

RowFactory.create(x._2().split("\t"))

you get Row(_: String, _: String), but your schema declares that you have Row(_: Integer, _: String). Since there is no automatic type casting, you get the error.
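That is exactly what the stack trace shows: Catalyst calls getInt on a field that actually holds a String. As a standalone illustration (hypothetical code, not from the question):

// split() returns String[], so both fields of this Row are Strings.
Row row = RowFactory.create("1\titem1".split("\t"));
row.getString(0); // fine: the field really is a String
row.getInt(0);    // throws ClassCastException: String cannot be cast to Integer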

To make it work, you can either cast the values when you create the rows, or define id as StringType and use the cast method after the data frame has been created.
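For example, a minimal sketch of both options, meant to replace the corresponding lines inside the foreachRDD block above. It reuses rdd, rowRdd, schema, name, and sqlContext from the question's snippet; typedRowRdd, typedDf, idAsString, stringSchema, stringDf, and castedDf are illustrative names, and Option 1 assumes the first tab-separated field always parses as an integer:

// Option 1: parse the value while building each Row, so the Row contents
// match the IntegerType declared in the schema.
JavaRDD<Row> typedRowRdd = rdd.map(x -> {
    String[] parts = x._2().split("\t");
    return RowFactory.create(Integer.parseInt(parts[0]), parts[1]);
});
DataFrame typedDf = sqlContext.createDataFrame(typedRowRdd, schema);

// Option 2: declare "id" as StringType first, then cast the column once
// the DataFrame exists. Column.cast and DataFrame.withColumn are part of
// the Spark 1.x API; since 1.4, withColumn replaces an existing column
// with the same name.
StructField idAsString = DataTypes.createStructField("id", DataTypes.StringType, true);
StructType stringSchema = DataTypes.createStructType(Arrays.asList(idAsString, name));
DataFrame stringDf = sqlContext.createDataFrame(rowRdd, stringSchema);
DataFrame castedDf = stringDf.withColumn("id", stringDf.col("id").cast(DataTypes.IntegerType));

Either way, the values stored in the rows end up matching the types the schema declares, so the CatalystTypeConverters unboxing step no longer fails.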