There is a similar question here: How to add a schema to a Dataset in Spark?
However, the problem I am facing is that I already have a predefined Dataset<Obj1>, and I want to define a schema that matches its data members. The end goal is to be able to join two Java objects.
Sample code:
Dataset<Row> rowDataset = spark.getSpark().sqlContext().createDataFrame(rowRDD, schema).toDF();
Dataset<MyObj> objResult = rowDataset.map((MapFunction<Row, MyObj>) row ->
        new MyObj(
                row.getInt(row.fieldIndex("field1")),
                row.isNullAt(row.fieldIndex("field2")) ? "" : row.getString(row.fieldIndex("field2")),
                row.isNullAt(row.fieldIndex("field3")) ? "" : row.getString(row.fieldIndex("field3")),
                row.isNullAt(row.fieldIndex("field4")) ? "" : row.getString(row.fieldIndex("field4"))
        ), Encoders.javaSerialization(MyObj.class));
If I print the schema of the row Dataset, I get the schema as expected:
rowDataset.printSchema();
root
|-- field1: integer (nullable = false)
|-- field2: string (nullable = false)
|-- field3: string (nullable = false)
|-- field4: string (nullable = false)
If I print the schema of the object Dataset, I lose the actual schema:
objResult.printSchema();
root
|-- value: binary (nullable = true)
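That single `value: binary` column reflects what `Encoders.javaSerialization` does: the entire object is stored as one opaque serialized byte array, so Spark has no per-field structure to expose. A minimal, Spark-free sketch using plain JDK serialization (the class names `SerDemo` and `MyObj` here are hypothetical, for illustration only) shows the same collapse into a blob:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerDemo {
    // Hypothetical stand-in for the question's MyObj.
    public static class MyObj implements Serializable {
        public int field1 = 7;
        public String field2 = "x";
    }

    // Serialize the whole object into a single byte[] blob, which is
    // effectively what the javaSerialization encoder stores per row.
    public static byte[] toBytes(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] bytes = toBytes(new MyObj());
        // The individual fields are no longer addressable, which is why
        // printSchema reports only one binary column.
        System.out.println(bytes.length);
    }
}
```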
The question is: how do I apply a schema to Dataset<MyObj>?
Answer 0 (score: 1)
Below is a code snippet I tried, and Spark runs as expected; the root cause of the issue seems to be the encoder passed to the map function, nothing else. Encoders.javaSerialization stores the whole object as one binary column, whereas Encoders.bean derives a column per bean property.
SparkSession session = SparkSession.builder().config(conf).getOrCreate();
Dataset<Row> ds = session.read().text("<some path>");
Encoder<Employee> employeeEncode = Encoders.bean(Employee.class);
ds.map(new MapFunction<Row, Employee>() {
    @Override
    public Employee call(Row value) throws Exception {
        return new Employee(value.getString(0).split(","));
    }
}, employeeEncode).printSchema();
Output:
root
|-- age: integer (nullable = true)
|-- name: string (nullable = true)
// Employee bean
public class Employee {
    public String name;
    public Integer age;

    public Employee() {
    }

    public Employee(String[] args) {
        this.name = args[0];
        this.age = Integer.parseInt(args[1]);
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public Integer getAge() {
        return age;
    }

    public void setAge(Integer age) {
        this.age = age;
    }
}
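Encoders.bean works because it discovers the schema from standard JavaBean properties (a no-arg constructor plus getter/setter pairs, exactly what Employee above provides). A Spark-free sketch of that discovery, using the JDK's java.beans.Introspector on a hypothetical MyObj bean (the class and field names here are illustrative assumptions), shows which properties would become columns:

```java
import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.ArrayList;
import java.util.List;

public class BeanSchemaDemo {
    // Hypothetical bean mirroring the question's MyObj: a no-arg
    // constructor plus getter/setter pairs, as Encoders.bean requires.
    public static class MyObj {
        private int field1;
        private String field2;
        public MyObj() {}
        public int getField1() { return field1; }
        public void setField1(int v) { field1 = v; }
        public String getField2() { return field2; }
        public void setField2(String v) { field2 = v; }
    }

    // List the readable/writable properties that a bean-style encoder
    // would map to schema columns, with their Java types.
    public static List<String> beanColumns(Class<?> cls) throws Exception {
        BeanInfo info = Introspector.getBeanInfo(cls, Object.class);
        List<String> cols = new ArrayList<>();
        for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
            if (pd.getReadMethod() != null && pd.getWriteMethod() != null) {
                cols.add(pd.getName() + ": " + pd.getPropertyType().getSimpleName());
            }
        }
        return cols;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(beanColumns(MyObj.class));
    }
}
```

So in the question's code, swapping `Encoders.javaSerialization(MyObj.class)` for `Encoders.bean(MyObj.class)` should preserve the field-level schema, provided MyObj follows these bean conventions.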