Adding a column value to a Spark Dataset based on a condition

Date: 2018-12-18 13:37:42

Tags: apache-spark apache-spark-dataset

public class EmployeeBean implements Serializable {

    private Long id;

    private String name;

    private Long salary;

    private Integer age;

    // getters and setters

}

Relevant Spark code:

SparkSession spark = SparkSession.builder().master("local[2]").appName("play-with-spark").getOrCreate();
List<EmployeeBean> employees1 = populateEmployees(1, 10);

Dataset<EmployeeBean> ds1 = spark.createDataset(employees1, Encoders.bean(EmployeeBean.class));
ds1.show();
ds1.printSchema();

Dataset<Row> ds2 = ds1.where("age is null").withColumn("is_age_null", lit(true));
Dataset<Row> ds3 = ds1.where("age is not null").withColumn("is_age_null", lit(false));

Dataset<Row> ds4 = ds2.union(ds3);
ds4.show();

Relevant output:

ds1

+----+---+----+------+
| age| id|name|salary|
+----+---+----+------+
|null|  1|dev1| 11000|
|   2|  2|dev2| 12000|
|null|  3|dev3| 13000|
|   4|  4|dev4| 14000|
|null|  5|dev5| 15000|
+----+---+----+------+

ds4

+----+---+----+------+-----------+
| age| id|name|salary|is_age_null|
+----+---+----+------+-----------+
|null|  1|dev1| 11000|       true|
|null|  3|dev3| 13000|       true|
|null|  5|dev5| 15000|       true|
|   2|  2|dev2| 12000|      false|
|   4|  4|dev4| 14000|      false|
+----+---+----+------+-----------+

Is there a better way to add this column to the Dataset, rather than creating two Datasets and performing a union?

1 answer:

Answer 0 (score: 0)

The same can be achieved in a single pass with withColumn, using when and otherwise:

ds1.withColumn("is_age_null", when(col("age").isNull(), lit(true)).otherwise(lit(false))).show();

This produces the same result as ds4 (apart from row order, since the union version groups the null-age rows first).
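For intuition, when(col("age").isNull(), lit(true)).otherwise(lit(false)) evaluates a per-row null check. The following plain-Java sketch (no Spark required; the Employee class here is a hypothetical stand-in for the EmployeeBean in the question) mirrors what the column expression computes for each row:

```java
import java.util.Arrays;
import java.util.List;

public class IsAgeNullSketch {
    // Minimal stand-in for the question's EmployeeBean; only age matters here.
    static class Employee {
        final Integer age;
        Employee(Integer age) { this.age = age; }
    }

    // Mirrors when(col("age").isNull(), lit(true)).otherwise(lit(false))
    static boolean isAgeNull(Employee e) {
        return e.age == null;
    }

    public static void main(String[] args) {
        // Ages matching the first three rows of ds1: null, 2, null
        List<Employee> employees = Arrays.asList(
                new Employee(null), new Employee(2), new Employee(null));
        for (Employee e : employees) {
            System.out.println(isAgeNull(e));
        }
    }
}
```

Unlike the two-Dataset-plus-union approach, this conditional-column form preserves the original row order and avoids scanning ds1 twice.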