I am trying to solve the age-old problem of adding a sequence number to a dataset. I am working with DataFrames, and there appears to be no DataFrame equivalent of RDD.zipWithIndex. On the other hand, the following works more or less the way I want it to:
val origDF = sqlContext.load(...)

val seqDF = sqlContext.createDataFrame(
  origDF.rdd.zipWithIndex.map(ln => Row.fromSeq(Seq(ln._2) ++ ln._1.toSeq)),
  StructType(Array(StructField("seq", LongType, false)) ++ origDF.schema.fields)
)
In my actual application, origDF will not be loaded directly from a file; it will be created by joining 2-3 other DataFrames together and will contain upwards of 100 million rows.
Is there a better way to do this? What can I do to optimize it?
Answer 0 (score: 32)
The following is posted on behalf of David Griffin (who is unable to edit it in himself).
The all-singing, all-dancing dfZipWithIndex method. You can set the starting offset (which defaults to 1), the index column name (defaults to "id"), and place the column either in front or at the back:
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.types.{LongType, StructField, StructType}
import org.apache.spark.sql.Row

def dfZipWithIndex(
  df: DataFrame,
  offset: Int = 1,
  colName: String = "id",
  inFront: Boolean = true
) : DataFrame = {
  df.sqlContext.createDataFrame(
    df.rdd.zipWithIndex.map(ln =>
      Row.fromSeq(
        (if (inFront) Seq(ln._2 + offset) else Seq())
          ++ ln._1.toSeq ++
        (if (inFront) Seq() else Seq(ln._2 + offset))
      )
    ),
    StructType(
      (if (inFront) Array(StructField(colName, LongType, false)) else Array[StructField]())
        ++ df.schema.fields ++
      (if (inFront) Array[StructField]() else Array(StructField(colName, LongType, false)))
    )
  )
}
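A minimal usage sketch (not part of the original answer), assuming a hypothetical DataFrame origDF like the one loaded in the question:

// Prepend a 1-based "seq" column; pass inFront = false to append it at the end instead.
val seqDF = dfZipWithIndex(origDF, offset = 1, colName = "seq", inFront = true)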
Answer 1 (score: 10)
Since Spark 1.6 there is a function called monotonically_increasing_id().
It generates a new column with a unique 64-bit monotonic index for each row.
But the values are not consecutive: each partition starts a new range, so we have to compute each partition's offset before using it.
Trying to provide an "rdd-free" solution, I ended up with some collect(), but it only collects the offsets, one value per partition, so it will not cause an OOM.
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.LongType

def zipWithIndex(df: DataFrame, offset: Long = 1, indexName: String = "index") = {
  val dfWithPartitionId = df
    .withColumn("partition_id", spark_partition_id())
    .withColumn("inc_id", monotonically_increasing_id())

  val partitionOffsets = dfWithPartitionId
    .groupBy("partition_id")
    .agg(count(lit(1)) as "cnt", first("inc_id") as "inc_id")
    .orderBy("partition_id")
    .select(sum("cnt").over(Window.orderBy("partition_id")) - col("cnt") - col("inc_id") + lit(offset) as "cnt")
    .collect()
    .map(_.getLong(0))
    .toArray

  dfWithPartitionId
    .withColumn("partition_offset", udf((partitionId: Int) => partitionOffsets(partitionId), LongType)(col("partition_id")))
    .withColumn(indexName, col("partition_offset") + col("inc_id"))
    .drop("partition_id", "partition_offset", "inc_id")
}
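A usage sketch under the same assumptions (a hypothetical DataFrame hugeDF plus the imports shown above):

// Adds a consecutive "index" column starting at 1 without repartitioning hugeDF.
val indexed = zipWithIndex(hugeDF, offset = 1, indexName = "index")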
This solution does not repack the original rows and does not repartition the original huge dataframe, so it is quite fast in the real world:
200 GB of CSV data (43 million rows with 150 columns) read, indexed and written to parquet in 2 minutes on 240 cores.
After testing my solution, I ran Kirk Broadhurst's solution and it was 20 seconds slower.
You may or may not want to use dfWithPartitionId.cache(), depending on the task.
Answer 2 (score: 7)
As of Spark 1.5, Window expressions were added to Spark. Instead of having to convert the DataFrame to an RDD, you can now use org.apache.spark.sql.expressions.row_number. Note that I found the dfZipWithIndex above to perform significantly faster than the algorithm below, but I am posting it anyway.
In any case, this is what works for me:
import org.apache.spark.sql.expressions._
import org.apache.spark.sql.functions.{lit, row_number}

df.withColumn("row_num", row_number.over(Window.partitionBy(lit(1)).orderBy(lit(1))))
Note that I use lit(1) for both the partitioning and the ordering. This makes everything end up in a single partition, and it seems to preserve the original ordering of the DataFrame, but I suppose it is also what slows it down.
I tested it on a 4-column DataFrame with 7,000,000 rows, and the speed difference between this and the dfZipWithIndex above is significant (like I said, the RDD functions are much, much faster).
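As a variant (not from the original answer): if the DataFrame already has a column that defines the desired order, say a hypothetical ts column, the window can be ordered by it instead of lit(1). Spark still pulls all rows into a single partition for a global row_number, so the performance caveat remains, but the numbering becomes deterministic:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.row_number

// Global row numbers ordered by the assumed "ts" column; no partitionBy means a single partition.
val numbered = df.withColumn("row_num", row_number.over(Window.orderBy("ts")))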
Answer 3 (score: 3)
PySpark version:
from pyspark.sql.types import LongType, StructField, StructType

def dfZipWithIndex (df, offset=1, colName="rowId"):
    '''
        Enumerates dataframe rows in native order, like rdd.ZipWithIndex(), but on a dataframe
        and preserves a schema

        :param df: source dataframe
        :param offset: adjustment to zipWithIndex()'s index
        :param colName: name of the index column
    '''

    new_schema = StructType(
                    [StructField(colName, LongType(), True)]  # new added field in front
                    + df.schema.fields                        # previous schema
                )

    zipped_rdd = df.rdd.zipWithIndex()

    new_rdd = zipped_rdd.map(lambda (row, rowId): ([rowId + offset] + list(row)))

    return spark.createDataFrame(new_rdd, new_schema)
A jira has also been created to add this functionality to Spark natively: https://issues.apache.org/jira/browse/SPARK-23074
Answer 4 (score: 2)
Spark Java API version:
I have implemented @Evgeny's solution for performing zipWithIndex on DataFrames in Java and wanted to share the code.
It also contains the improvements offered by @fylb in his solution. I can confirm for Spark 2.4 that the execution fails when the entries returned by spark_partition_id() do not start at 0 or do not increase sequentially. As this function is documented to be non-deterministic, it is very likely that one of the above cases occurs. One example was triggered by increasing the partition count.
The Java implementation follows:
import static org.apache.spark.sql.functions.*;

import java.util.HashMap;
import java.util.Map;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.expressions.UserDefinedFunction;
import org.apache.spark.sql.expressions.Window;
import org.apache.spark.sql.types.DataTypes;

public static Dataset<Row> zipWithIndex(Dataset<Row> df, Long offset, String indexName) {
    Dataset<Row> dfWithPartitionId = df
            .withColumn("partition_id", spark_partition_id())
            .withColumn("inc_id", monotonically_increasing_id());

    Object partitionOffsetsObject = dfWithPartitionId
            .groupBy("partition_id")
            .agg(count(lit(1)).alias("cnt"), first("inc_id").alias("inc_id"))
            .orderBy("partition_id")
            .select(col("partition_id"), sum("cnt").over(Window.orderBy("partition_id")).minus(col("cnt")).minus(col("inc_id")).plus(lit(offset)).alias("cnt"))
            .collect();
    Row[] partitionOffsetsArray = ((Row[]) partitionOffsetsObject);
    Map<Integer, Long> partitionOffsets = new HashMap<>();
    for (int i = 0; i < partitionOffsetsArray.length; i++) {
        partitionOffsets.put(partitionOffsetsArray[i].getInt(0), partitionOffsetsArray[i].getLong(1));
    }

    UserDefinedFunction getPartitionOffset = udf(
            (partitionId) -> partitionOffsets.get((Integer) partitionId), DataTypes.LongType
    );

    return dfWithPartitionId
            .withColumn("partition_offset", getPartitionOffset.apply(col("partition_id")))
            .withColumn(indexName, col("partition_offset").plus(col("inc_id")))
            .drop("partition_id", "partition_offset", "inc_id");
}
Answer 5 (score: 1)
@Evgeny, your solution is interesting. Note that there is a bug when you have empty partitions (the array is then missing those partition indexes; at least that is what happened to me with Spark 1.6), so I converted the array into a Map(partitionId -> offset).
Also, I took the sources of monotonically_increasing_id so that "inc_id" starts from 0 in each partition.
Here is the updated version:
import org.apache.spark.sql.catalyst.expressions.LeafExpression
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.types.LongType
import org.apache.spark.sql.catalyst.expressions.Nondeterministic
import org.apache.spark.sql.catalyst.expressions.codegen.GeneratedExpressionCode
import org.apache.spark.sql.catalyst.expressions.codegen.CodeGenContext
import org.apache.spark.sql.types.DataType
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions._
import org.apache.spark.sql.Column
import org.apache.spark.sql.expressions.Window

case class PartitionMonotonicallyIncreasingID() extends LeafExpression with Nondeterministic {
  /**
   * From org.apache.spark.sql.catalyst.expressions.MonotonicallyIncreasingID
   *
   * Record ID within each partition. By being transient, count's value is reset to 0 every time
   * we serialize and deserialize and initialize it.
   */
  @transient private[this] var count: Long = _

  override protected def initInternal(): Unit = {
    count = 1L // notice this starts at 1, not 0 as in org.apache.spark.sql.catalyst.expressions.MonotonicallyIncreasingID
  }

  override def nullable: Boolean = false

  override def dataType: DataType = LongType

  override protected def evalInternal(input: InternalRow): Long = {
    val currentCount = count
    count += 1
    currentCount
  }

  override def genCode(ctx: CodeGenContext, ev: GeneratedExpressionCode): String = {
    val countTerm = ctx.freshName("count")
    ctx.addMutableState(ctx.JAVA_LONG, countTerm, s"$countTerm = 1L;")
    ev.isNull = "false"
    s"""
      final ${ctx.javaType(dataType)} ${ev.value} = $countTerm;
      $countTerm++;
    """
  }
}

object DataframeUtils {
  def zipWithIndex(df: DataFrame, offset: Long = 0, indexName: String = "index") = {
    // from https://stackoverflow.com/questions/30304810/dataframe-ified-zipwithindex
    val dfWithPartitionId = df.withColumn("partition_id", spark_partition_id()).withColumn("inc_id", new Column(PartitionMonotonicallyIncreasingID()))

    // collect each partition size, create the offset pages
    val partitionOffsets: Map[Int, Long] = dfWithPartitionId
      .groupBy("partition_id")
      .agg(max("inc_id") as "cnt") // in each partition, count(inc_id) is equal to max(inc_id) (I don't know which one would be faster)
      .select(col("partition_id"), sum("cnt").over(Window.orderBy("partition_id")) - col("cnt") + lit(offset) as "cnt")
      .collect()
      .map(r => (r.getInt(0) -> r.getLong(1)))
      .toMap

    def partition_offset(partitionId: Int): Long = partitionOffsets(partitionId)
    val partition_offset_udf = udf(partition_offset _)

    // and re-number the index
    dfWithPartitionId
      .withColumn("partition_offset", partition_offset_udf(col("partition_id")))
      .withColumn(indexName, col("partition_offset") + col("inc_id"))
      .drop("partition_id")
      .drop("partition_offset")
      .drop("inc_id")
  }
}
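A usage sketch (my addition), assuming a hypothetical DataFrame df; note that this variant's offset defaults to 0:

// Adds an "index" column built from per-partition offsets plus the custom per-partition counter.
val indexed = DataframeUtils.zipWithIndex(df, offset = 0, indexName = "index")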
Answer 6 (score: 1)
I have modified @Tagar's version to run on Python 3.7 and wanted to share it:
from pyspark.sql.types import LongType, StructField, StructType

def dfZipWithIndex (df, offset=1, colName="rowId"):
    '''
        Enumerates dataframe rows in native order, like rdd.ZipWithIndex(), but on a dataframe
        and preserves a schema

        :param df: source dataframe
        :param offset: adjustment to zipWithIndex()'s index
        :param colName: name of the index column
    '''

    new_schema = StructType(
                    [StructField(colName, LongType(), True)]  # new added field in front
                    + df.schema.fields                        # previous schema
                )

    zipped_rdd = df.rdd.zipWithIndex()

    new_rdd = zipped_rdd.map(lambda args: ([args[1] + offset] + list(args[0])))  # use this for python 3+, tuple gets passed as single argument so using args and [] notation to read elements within args

    return spark.createDataFrame(new_rdd, new_schema)
Answer 7 (score: 1)
Here is my proposal, the advantages of which are:
- It does not involve any serialization/deserialization [1] of the DataFrame's InternalRows.
- Its logic is kept minimal by relying only on RDD.zipWithIndex.
Its main downside is that it has to live under the org.apache.spark.sql package (see the package declaration below).
Imports:
package org.apache.spark.sql;
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.execution.LogicalRDD
import org.apache.spark.sql.functions.lit
/**
  * Optimized Spark SQL equivalent of RDD.zipWithIndex.
  *
  * @param df
  * @param indexColName
  * @return `df` with a column named `indexColName` of consecutive unique ids.
  */
def zipWithIndex(df: DataFrame, indexColName: String = "index"): DataFrame = {
  import df.sparkSession.implicits._

  val dfWithIndexCol: DataFrame = df
    .drop(indexColName)
    .select(lit(0L).as(indexColName), $"*")

  val internalRows: RDD[InternalRow] = dfWithIndexCol
    .queryExecution
    .toRdd
    .zipWithIndex()
    .map {
      case (internalRow: InternalRow, index: Long) =>
        internalRow.setLong(0, index)
        internalRow
    }

  Dataset.ofRows(
    df.sparkSession,
    LogicalRDD(dfWithIndexCol.schema.toAttributes, internalRows)(df.sparkSession)
  )
}
[1]: (from/to InternalRow's underlying byte array <-> GenericRow's underlying collection of JVM objects).
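A usage sketch (my addition), assuming the helper above has been compiled inside the org.apache.spark.sql package as required, and a hypothetical DataFrame df:

// Adds a consecutive "index" column while keeping the rows as InternalRows end to end.
val indexed = zipWithIndex(df, indexColName = "index")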