I have a Spark DataFrame with the following columns:
tid | acct | bssn | name
----|------|------|------
1   | 123  | 111  | Peter
2   | 123  | 222  | Paul
3   | 456  | 333  | John
4   | 567  | 444  | Casey
I am trying to compare the values in the acct column and, where they match, merge the corresponding bssn and name values into one group. How can I do this so that the resulting DataFrame looks like this:
acct | bssn       | name
-----|------------|--------------
123  | (111, 222) | (Peter, Paul)
456  | 333        | John
567  | 444        | Casey
Answer 0 (score: 0)
You can try a left semi join on the column. A left semi join keeps the rows of the left DataFrame that have a match on the right and returns only the left-side columns. It would look like:
Dataset<Row> joinedDf = df
    .join(
        rightDf,
        df.col("acct").equalTo(rightDf.col("acct2")),
        "leftsemi")
    .drop(rightDf.col("acct2"))
    .drop(rightDf.col("name2"))
    .drop(rightDf.col("bssn2"));
where rightDf is similar to the left df:
Dataset<Row> rightDf = df
    .withColumnRenamed("acct", "acct2")
    .withColumnRenamed("bssn", "bssn2")
    .withColumnRenamed("name", "name2")
    .drop("tid");
Then group by acct and collect the bssn and name values as lists.
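A sketch of that aggregation step (the same call as in the full code below; it assumes the joinedDf from the join above and a static import of collect_list from org.apache.spark.sql.functions):

// Group the joined rows by acct and collect bssn and name into lists
Dataset<Row> aggDf = joinedDf.groupBy(joinedDf.col("acct"))
    .agg(collect_list("bssn"), collect_list("name"));
aggDf.show(false);

The result will be: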
+----+------------------+------------------+
|acct|collect_list(bssn)|collect_list(name)|
+----+------------------+------------------+
|456 |[333] |[John] |
|567 |[444] |[Casey] |
|123 |[111, 222] |[Peter, Paul] |
+----+------------------+------------------+
Here is the whole code:
package net.jgp.books.spark.ch12.lab990_others;

import static org.apache.spark.sql.functions.*;

import java.util.ArrayList;
import java.util.List;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

/**
 * Self join.
 *
 * @author jgp
 */
public class SelfJoinApp {

  /**
   * main() is your entry point to the application.
   *
   * @param args
   */
  public static void main(String[] args) {
    SelfJoinApp app = new SelfJoinApp();
    app.start();
  }

  /**
   * The processing code.
   */
  private void start() {
    // Creates a session on a local master
    SparkSession spark = SparkSession.builder()
        .appName("Self join")
        .master("local[*]")
        .getOrCreate();

    Dataset<Row> df = createDataframe(spark);
    df.show(false);

    // Right-hand side of the self join: same data with renamed columns, tid dropped
    Dataset<Row> rightDf = df
        .withColumnRenamed("acct", "acct2")
        .withColumnRenamed("bssn", "bssn2")
        .withColumnRenamed("name", "name2")
        .drop("tid");

    // Left semi join on acct: keeps the rows of df that have a match in rightDf.
    // A left semi join returns only left-side columns, so the drops below are defensive.
    Dataset<Row> joinedDf = df
        .join(
            rightDf,
            df.col("acct").equalTo(rightDf.col("acct2")),
            "leftsemi")
        .drop(rightDf.col("acct2"))
        .drop(rightDf.col("name2"))
        .drop(rightDf.col("bssn2"));
    joinedDf.show(false);

    // Group by acct and collect the matching bssn and name values into lists
    Dataset<Row> aggDf = joinedDf.groupBy(joinedDf.col("acct"))
        .agg(collect_list("bssn"), collect_list("name"));
    aggDf.show(false);
  }

  private static Dataset<Row> createDataframe(SparkSession spark) {
    StructType schema = DataTypes.createStructType(new StructField[] {
        DataTypes.createStructField(
            "tid",
            DataTypes.IntegerType,
            false),
        DataTypes.createStructField(
            "acct",
            DataTypes.IntegerType,
            false),
        DataTypes.createStructField(
            "bssn",
            DataTypes.IntegerType,
            false),
        DataTypes.createStructField(
            "name",
            DataTypes.StringType,
            false) });

    List<Row> rows = new ArrayList<>();
    rows.add(RowFactory.create(1, 123, 111, "Peter"));
    rows.add(RowFactory.create(2, 123, 222, "Paul"));
    rows.add(RowFactory.create(3, 456, 333, "John"));
    rows.add(RowFactory.create(4, 567, 444, "Casey"));

    return spark.createDataFrame(rows, schema);
  }
}
Answer 1 (score: 0)
You can use groupBy and collect_set:
// Assumes an active SparkSession named spark
import spark.implicits._  // enables .toDF on a local collection
import org.apache.spark.sql.functions.collect_set

val data = List(
  (1, 123, 111, "Peter"),
  (2, 123, 222, "Paul"),
  (3, 456, 333, "John"),
  (4, 567, 444, "Casey")
).toDF("tid", "acct", "bssn", "name")

val result = data.groupBy("acct").agg(collect_set("bssn"), collect_set("name"))
result.show(false)
Output:
+----+-----------------+-----------------+
|acct|collect_set(bssn)|collect_set(name)|
+----+-----------------+-----------------+
|123 |[222, 111] |[Paul, Peter] |
|567 |[444] |[Casey] |
|456 |[333] |[John] |
+----+-----------------+-----------------+
I guess this can easily be translated to Java.
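For reference, a minimal Java sketch of the same idea (assuming the df from the question and a static import of collect_set from org.apache.spark.sql.functions):

// Group by acct and collect the distinct bssn and name values into sets
Dataset<Row> result = df.groupBy("acct")
    .agg(collect_set("bssn"), collect_set("name"));
result.show(false);

Note that collect_set removes duplicates and does not guarantee element order (hence [222, 111] above); use collect_list if you want one entry per input row.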