Getting the minimum value of a nested array in a Spark Dataset

Date: 2017-08-16 10:41:45

Tags: java json apache-spark

I have a JSON server log file that I want to parse with Spark 2.2.0 and the Java API. I convert it into a Dataset with:

Dataset<Row> df = spark.read().json(args[0]);

Calling the following then prints this schema:

df.printSchema();

root
|-- timestamp: long (nullable = true)
|-- results: struct (nullable = true)
|    |-- entities: array (nullable = true)
|    |    |-- element: struct (containsNull = true)
|    |    |    |-- entity_id: string (nullable = true)
|    |    |    |-- score: long (nullable = true)
|    |    |    |-- is_available: boolean (nullable = true)
|    |-- number_of_results: long (nullable = true)
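
For reference, one log line matching this schema could look like this (hypothetical values):

{"timestamp": 1502872905, "results": {"entities": [{"entity_id": "a1", "score": 42, "is_available": true}, {"entity_id": "b2", "score": 17, "is_available": false}], "number_of_results": 2}}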

I want to get the entity with the lowest score that is also available, so I would end up with a dataset like this:

root
|-- timestamp: long (nullable = true)
|-- results: struct (nullable = true)
|    |-- entity: struct (nullable = true)
|    |    |-- entity_id: string (nullable = true)
|    |    |-- score: long (nullable = true)
|    |    |-- is_available: boolean (nullable = true)

How can I do this transformation?

2 Answers:

Answer 0 (score: 1)

You can apply a user-defined function to the array column:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.api.java.UDF1;
import org.apache.spark.sql.functions;
import org.apache.spark.sql.types.ArrayType;
import org.apache.spark.sql.types.DataType;
import scala.collection.JavaConversions;
import scala.collection.Seq;

// Define a UDF that scans the array and keeps the available entity with the lowest score.
// Field indices follow the schema above: 0 = entity_id, 1 = score, 2 = is_available.
UDF1<Seq<Row>, Row> getElement = seq -> {
    Row bestRow = null;
    long bestRowScore = Long.MAX_VALUE;
    for (Row r : JavaConversions.seqAsJavaList(seq)) {
        if (r.getBoolean(2) && r.getLong(1) < bestRowScore) {
            bestRow = r;
            bestRowScore = r.getLong(1);
        }
    }
    return bestRow;
};
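
If you prefer not to depend on field positions, the same loop can look fields up by name instead (a sketch, assuming the field names from the schema above):

// Variant of the same UDF that reads struct fields by name rather than index
UDF1<Seq<Row>, Row> getElementByName = seq -> {
    Row bestRow = null;
    long bestRowScore = Long.MAX_VALUE;
    for (Row r : JavaConversions.seqAsJavaList(seq)) {
        boolean available = r.<Boolean>getAs("is_available");
        long score = r.<Long>getAs("score");
        if (available && score < bestRowScore) {
            bestRow = r;
            bestRowScore = score;
        }
    }
    return bestRow;
};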

// The UDF returns one element of the array, so its return type is the array's element type
ArrayType arrayType = (ArrayType) df.select(df.col("results.entities")).schema().fields()[0].dataType();
DataType elementType = arrayType.elementType();

// Register the UDF on the session created earlier (called "spark" above)
spark.udf().register("getElement", getElement, elementType);

// Apply the UDF to the dataset and alias the result column so the schema is readable
Dataset<Row> transformedDF = df.select(
        df.col("timestamp"),
        functions.callUDF("getElement", df.col("results.entities")).alias("entity"));
transformedDF.printSchema();
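
If you want the selected entity nested under a results struct, as in the desired schema above, you can wrap the column afterwards (a minimal sketch, assuming the entity alias from the previous step):

// Wrap the entity column in a struct named "results" to mirror the desired schema
Dataset<Row> nestedDF = transformedDF.select(
        transformedDF.col("timestamp"),
        functions.struct(transformedDF.col("entity")).alias("results"));
nestedDF.printSchema();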

Answer 1 (score: 0)

You can do this with a window function such as row_number(), after first exploding the nested array:

df.createOrReplaceTempView("df");
// Explode the array, keep available entities, take the lowest score per record.
Dataset<Row> minPerEntityDF = spark.sql(
    "SELECT timestamp, entity, row_number() OVER (PARTITION BY timestamp ORDER BY entity.score) AS rn "
    + "FROM (SELECT timestamp, explode(results.entities) AS entity FROM df) e "
    + "WHERE entity.is_available")
  .filter("rn = 1");