Sorting within a column based on weight

Time: 2016-06-29 06:01:41

Tags: apache-spark apache-spark-sql spark-dataframe

I am generating the paired rdd/df through another process, but here is code that generates the dataset to help with debugging.

Here is the sample i/p file (/scratch/test2.txt), tab-separated:

1   book1   author1   1.10
2   book2   author2   2.20
1   book3   author2   3.30

Here is the code that generates the dataframe:

case class RefText (index: Int,  description: String, fName: String, weight: Double)

import org.apache.spark.sql.functions.{udf, col}   // needed below for udf() and col()

val annotation_split = sc.textFile("/scratch/test2.txt").map(_.split("\t"))
val annotation =  annotation_split.map{line => RefText(line(0).toInt, line(1), line(2), line(3).toDouble)}.toDF()
val getConcatenated = udf( (first: String, second: String, third: Double) => { first + "#" + second + "#" + third.toString} )
val annotate_concated =  annotation.withColumn("annotation",getConcatenated(col("description"), col("fName"), col("weight"))).select("index","annotation")

annotate_concated.show()
+-----+-----------------+
|index|       annotation|
+-----+-----------------+
|    1|book1#author1#1.1|
|    2|book2#author2#2.2|
|    1|book3#author2#3.3|
+-----+-----------------+

// Here is how I generate the paired rdd.
import org.apache.spark.rdd.PairRDDFunctions   // needed for the explicit PairRDDFunctions type annotation
val paired_rdd : PairRDDFunctions[String, String] = annotate_concated.rdd.map(row => (row.getString(0), row.getString(1)))
val df  = paired_rdd.reduceByKey { case (val1, val2) => val1 + "|" + val2 }.toDF("user_id","description")

Here is sample data from my dataframe. The description column has the format (text1#text2#weight|text1#text2#weight|...):

user1   book1#author1#0.07841217886795074|tool1#desc1#1.27044260397331488|song1#album1#-2.052661673730870676|item1#cat1#-0.005683148395350108

user2   book2#author1#4.07841217886795074|tool2#desc1#-1.27044260397331488|song2#album1#2.052661673730870676|item2#cat1#-0.005683148395350108

I want to sort the description column in descending order of weight.

The desired o/p is:

user1   tool1#desc1#1.27044260397331488|book1#author1#0.07841217886795074|item1#cat1#-0.005683148395350108|song1#album1#-2.052661673730870676

user2   book2#author1#4.07841217886795074|song2#album1#2.052661673730870676|tool2#desc1#-1.27044260397331488|item2#cat1#-0.005683148395350108

Any help with this would be much appreciated.

1 answer:

Answer 0: (score: 0)

I don't think there is a straightforward way to reorder the values inside a cell. Personally, I would pre-order the data instead, i.e. in the annotation_split rdd.

Here is an example (I had to change the code slightly to make it work). The file on HDFS (using regular spaces and @ as separators):

1 book1 author1 1.10 @ 2 book2 author2 2.20 @ 1 book3 author2 3.30 

Then:

case class RefText (index: Int,  description: String, fName: String, weight: Double)
// split by line, then split line into columns
val annotation_split = sc.textFile(path).flatMap(_.split(" @ ")).map{_.split(" ")} 

// HERE IS THE TRICK: sort the lines by weight, in descending order
val annotation_sorted = annotation_split
    .map(line => (line.last.toFloat,line))   // key each line by its weight (the last field)
    .sortByKey(false)                        // false = descending
    .map(_._2)                               // drop the key, keep the sorted lines

// back to your code
val annotation =  annotation_sorted.map{line => RefText(line(0).toInt, line(1), line(2), line(3).toDouble)}.toDF()
val getConcatenated = udf( (first: String, second: String, third: Double) => { first + "#" + second + "#" + third.toString} )
val annotate_concated =  annotation.withColumn("annotation",getConcatenated(col("description"), col("fName"), col("weight"))).select("index","annotation")
// note: here, I replaced row.getString(0) by row.getInt(0) to avoid cast exception
val paired_rdd = annotate_concated.rdd.map(row => (row.getInt(0), row.getString(1)))
val df  = paired_rdd.reduceByKey { case (val1, val2) => val1 + "|" + val2 }.toDF("user_id","description")
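On the small debugging file above, the grouped result should come out roughly like this (a sketch only: the row order of show() may differ, and the in-cell order depends on the caveat below):

df.show(false)
// +-------+-----------------------------------+
// |user_id|description                        |
// +-------+-----------------------------------+
// |1      |book3#author2#3.3|book1#author1#1.1|
// |2      |book2#author2#2.2                  |
// +-------+-----------------------------------+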

The only problem is that, given your level of parallelism, the ordering may get shuffled again afterwards. An alternative is to map over each column value and rewrite it in sorted order (split, sort, concatenate).
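For that alternative, here is a minimal sketch (not part of the original answer; sortAnnotations and sorted_df are made-up names). It re-sorts each description cell in place by parsing the weight at the end of every text1#text2#weight piece:

import org.apache.spark.sql.functions.{udf, col}

// Hypothetical UDF: split the cell on "|", sort the pieces by their trailing
// weight in descending order, and join them back together.
val sortAnnotations = udf { (description: String) =>
  description
    .split("\\|")
    .sortBy(piece => -piece.split("#").last.toDouble)
    .mkString("|")
}

val sorted_df = df.withColumn("description", sortAnnotations(col("description")))

This keeps the grouping logic unchanged and only reorders the already-concatenated string, so it is not affected by how the rows were partitioned.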