Scala: pass the DataFrame's elements from each row and get the results back in separate rows

Date: 2018-09-28 13:12:32

Tags: scala dataframe

In my requirement I have to pass two strings from two columns of a DataFrame to a function, get a string back, and store that result back in the DataFrame. Right now, because I pass the values as strings, the function always returns the same value, so every row gets filled with that one value (in my case, PPPP is filled into all rows).

Is there a way to pass the elements of each row (for those two columns) to the function and get the result back into the corresponding row? I am open to modifying the function to accept a DataFrame and return a DataFrame, or to accept an Array[String] and return an Array[String], but since I am new to programming I don't know how to do that. Could someone help me? Thanks.

def myFunction(key: String, value: String): String = {
  // do my processing here, produce a string value2, and return it
  value2
}



val DF2 = DF1.select(
    DF1("col1"),
    DF1("col2"),
    DF1("col5"))
  .withColumn("anyName", lit(myFunction(DF1("col3").toString(), DF1("col4").toString())))




/*
DF1:

+-----+----+--------+----+----+
|col1 |col2|col3    |col4|col5|
+-----+----+--------+----+----+
|Hello|5   |valueAAA|XXX |123 |
|How  |3   |valueCCC|YYY |111 |
|World|5   |valueDDD|ZZZ |222 |
+-----+----+--------+----+----+

DF2:

+-----+----+----+-------+
|col1 |col2|col5|anyName|
+-----+----+----+-------+
|Hello|5   |123 |PPPPP  |
|How  |3   |111 |PPPPP  |
|World|5   |222 |PPPPP  |
+-----+----+----+-------+
*/

1 Answer:

Answer 0 (score: 0)

Once you have defined your function, you need to register it as a udf(). The udf() function is available in org.apache.spark.sql.functions. Check this out:

scala> val DF1 = Seq(("Hello",5,"valueAAA","XXX",123),
     | ("How",3,"valueCCC","YYY",111),
     | ("World",5,"valueDDD","ZZZ",222)
     | ).toDF("col1","col2","col3","col4","col5")
DF1: org.apache.spark.sql.DataFrame = [col1: string, col2: int ... 3 more fields]

scala> val DF2 = DF1.select (  DF1("col1") ,DF1("col2") ,DF1("col5")    )
DF2: org.apache.spark.sql.DataFrame = [col1: string, col2: int ... 1 more field]

scala> DF2.show(false)
+-----+----+----+
|col1 |col2|col5|
+-----+----+----+
|Hello|5   |123 |
|How  |3   |111 |
|World|5   |222 |
+-----+----+----+


scala> DF1.select("*").show(false)
+-----+----+--------+----+----+
|col1 |col2|col3    |col4|col5|
+-----+----+--------+----+----+
|Hello|5   |valueAAA|XXX |123 |
|How  |3   |valueCCC|YYY |111 |
|World|5   |valueDDD|ZZZ |222 |
+-----+----+--------+----+----+

scala> def myConcat(a:String,b:String):String=
     | return a + "--" + b
myConcat: (a: String, b: String)String

scala> 
scala> import org.apache.spark.sql.functions._
import org.apache.spark.sql.functions._

scala> val myConcatUDF = udf(myConcat(_:String,_:String):String)
myConcatUDF: org.apache.spark.sql.expressions.UserDefinedFunction = UserDefinedFunction(<function2>,StringType,Some(List(StringType, StringType)))
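As an aside, assuming Spark 2.x, the same UDF can also be defined from a function literal instead of eta-expanding the method; both forms produce an equivalent UserDefinedFunction:

// equivalent definition from an anonymous function
val myConcatUDF = udf((a: String, b: String) => a + "--" + b)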

scala> DF1.select (  DF1("col1") ,DF1("col2") ,DF1("col5"), myConcatUDF( DF1("col3"), DF1("col4"))).show()
+-----+----+----+---------------+
| col1|col2|col5|UDF(col3, col4)|
+-----+----+----+---------------+
|Hello|   5| 123|  valueAAA--XXX|
|  How|   3| 111|  valueCCC--YYY|
|World|   5| 222|  valueDDD--ZZZ|
+-----+----+----+---------------+
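Note that the result column is auto-named UDF(col3, col4). To get the anyName column from the question, alias the UDF column; a minimal sketch, reusing the definitions above:

DF1.select(DF1("col1"), DF1("col2"), DF1("col5"),
  myConcatUDF(DF1("col3"), DF1("col4")).alias("anyName")).show()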


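Finally, to keep the shape of the withColumn code from the question, the same UDF can be applied that way too; a minimal sketch, assuming the DF1 and myConcatUDF definitions above:

// evaluate the UDF once per row and keep only the columns the question asked for
val DF2 = DF1
  .withColumn("anyName", myConcatUDF(DF1("col3"), DF1("col4")))
  .select("col1", "col2", "col5", "anyName")

Unlike the lit(myFunction(...)) version, this evaluates the function once per row, so each row gets its own concatenated value instead of a single repeated constant.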