I want to add a column to a Spark DataFrame whose value is the hash-mod of the existing DataFrame row. In the example below I can hash one specific column ("data"); how can I achieve the same over the entire row (all columns)?
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.functions.{col, udf}
import org.apache.spark.sql.types.{IntegerType, StructField, StructType}

object Container {
  case class intContainer(data: Int)
}

val sqlContext = new SQLContext(sc)
import sqlContext.implicits._

// Hashes a single column value into one of 10 buckets
val getBucket = udf((data: AnyRef) => data.hashCode() % 10)

// Note: this schema is declared but never used below
val schema = StructType(List(StructField("age", IntegerType)))

val userList = List(23, 24, 25, 57)
val df1: RDD[Container.intContainer] = sc.parallelize(userList).map(x => Container.intContainer(x))
val df = df1.toDF()
df.registerTempTable("dfcount")
val countdf = sqlContext.sql("select data, data + 1 as count, current_timestamp() as time from dfcount")
val xx = countdf.withColumn("bucket_id", getBucket(col("data")))
Answer 0 (score: 1)
The snippet below uses a udf that takes an array of columns, sums their hash codes, and takes the result modulo 10 to get the bucket value. This works for any number of columns and any schema.
import sqlContext.implicits._
import org.apache.spark.sql.functions.{array, udf}

// Sums the hash codes of every value in a row, then maps the sum to one of 10 buckets.
// The varargs signature eta-expands to Seq[AnyRef] => Int, which matches an array column.
def hasher(data: AnyRef*) = (data.map(_.hashCode).sum % 10)
val getBucket = udf(hasher _)

val df = sc.parallelize(('a' to 'z').map(_.toString) zip (1 to 30)).toDF("c1", "c2")
df.withColumn("bucket", getBucket(array(df.columns.map(df.apply): _*))).show()
+---+---+------+
| c1| c2|bucket|
+---+---+------+
| a| 1| 6|
| b| 2| 8|
| c| 3| 0|
| d| 4| 2|
| e| 5| 4|
| f| 6| 6|
| g| 7| 8|
| h| 8| 0|
| i| 9| 2|
| j| 10| 3|
| k| 11| 5|
| l| 12| 7|
| m| 13| 9|
| n| 14| 1|
| o| 15| 3|
| p| 16| 5|
| q| 17| 7|
| r| 18| 9|
| s| 19| 1|
| t| 20| 4|
+---+---+------+
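As a hedged alternative sketch (not from the original answer): newer Spark versions (2.0+) ship a built-in `hash` function (Murmur3) that accepts multiple columns, and `pmod` keeps the bucket id non-negative even when the hash is negative, which the `% 10` in the udf above does not guarantee. Assuming a Spark 2.x `DataFrame` named `df`:

```scala
import org.apache.spark.sql.functions.{col, hash, lit, pmod}

// Murmur3 hash over all columns, reduced to a non-negative bucket in [0, 10)
val bucketed = df.withColumn(
  "bucket",
  pmod(hash(df.columns.map(col): _*), lit(10))
)
bucketed.show()
```

Avoiding the udf also lets Catalyst optimize the expression, whereas a Scala udf is a black box to the optimizer.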