Spark: how to call a UDF over a Dataset in Java

Date: 2017-02-22 04:18:18

Tags: java scala apache-spark

What is the exact Java translation of the Scala snippet below?

import org.apache.spark.sql.functions.udf
def upper(s: String): String = {
    s.toUpperCase
}
val toUpper = udf(upper _)
peopleDS.select(peopleDS("name"), toUpper(peopleDS("name"))).show

Please fill in the missing call in the example below.

import org.apache.spark.sql.api.java.UDF1;

UDF1<String, String> toUpper = new UDF1<String, String>() {
    public String call(final String str) throws Exception {
        return str.toUpperCase();
    }
};

peopleDS.select(peopleDS.col("name"), /* how to apply toUpper to "name" here? */).show();

Note: registering the UDF and then invoking it with selectExpr works for me, but I need something like the select call shown above.

Working example:

sqlContext.udf().register("toUpper", (String s) -> s.toUpperCase(), DataTypes.StringType);
peopleDF.selectExpr("toUpper(name)", "name").show();

2 Answers:

Answer 0 (score: 4)

It is not possible to call a UDF in Java without registering it first. Below is your UDF.

private static UDF1<String, String> toUpper = new UDF1<String, String>() {
    public String call(final String str) throws Exception {
        return str.toUpperCase();
    }
};

Once the UDF is registered, you can invoke it with the callUDF function.

import static org.apache.spark.sql.functions.callUDF;
import static org.apache.spark.sql.functions.col;

sqlContext.udf().register("toUpper", toUpper, DataTypes.StringType);
peopleDF.select(col("name"), callUDF("toUpper", col("name"))).show();
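
With Java 8 the same register-then-call pattern can be written with a lambda, as in the questioner's own working example. A minimal sketch, assuming the same sqlContext and peopleDF and the imports shown above:

// The lambda implements UDF1<String, String>; null handling is left to the caller.
sqlContext.udf().register("toUpper", (String s) -> s.toUpperCase(), DataTypes.StringType);
// callUDF builds a Column expression that invokes the registered UDF by name.
peopleDF.select(col("name"), callUDF("toUpper", col("name"))).show();

(As an aside, later Spark releases, 2.3 and up, added org.apache.spark.sql.functions.udf overloads that wrap a Java UDF1 into a UserDefinedFunction without a named registration, but those did not exist when this question was asked.)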

Answer 1 (score: 0)

Input CSV:

+-------+--------+------+
|   name| address|salary|
+-------+--------+------+
|   Arun|  Indore|     1|
|Shubham|  Indore|     2|
| Mukesh|Hariyana|     3|
|   Arun|  Bhopal|     4|
|Shubham|Jabalpur|     5|
| Mukesh|  Rohtak|     6|
+-------+--------+------+

import org.apache.spark.SparkConf;
import org.apache.spark.SparkContext;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.functions;
import org.apache.spark.sql.api.java.UDF1;
import org.apache.spark.sql.types.DataTypes;

// Enclosing class added so the snippet compiles; the name is arbitrary.
public class SparkUdfExample {

    public static void main(String[] args) {
        SparkConf sparkConf = new SparkConf().setAppName("test").setMaster("local");
        SparkSession sparkSession = new SparkSession(new SparkContext(sparkConf));

        // Read the input CSV shown above, using the first row as the header.
        Dataset<Row> dataset = sparkSession.read().option("header", "true")
                .csv("C:\\Users\\Desktop\\Spark\\user.csv");

        /** Create the UDF */
        UDF1<String, String> toLower = new UDF1<String, String>() {
            @Override
            public String call(String str) throws Exception {
                return str.toLowerCase();
            }
        };

        /** Register the UDF */
        sparkSession.udf().register("toLower", toLower, DataTypes.StringType);

        /** Call the UDF via functions.callUDF */
        dataset.select(dataset.col("name"), dataset.col("salary"),
                functions.callUDF("toLower", dataset.col("address")).alias("address")).show();
    }
}

Output:
+-------+------+--------+
|   name|salary| address|
+-------+------+--------+
|   Arun|     1|  indore|
|Shubham|     2|  indore|
| Mukesh|     3|hariyana|
|   Arun|     4|  bhopal|
|Shubham|     5|jabalpur|
| Mukesh|     6|  rohtak|
+-------+------+--------+
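
A closing note on the example above: constructing the session with new SparkSession(new SparkContext(sparkConf)) works, but the builder is the usual Spark 2.x idiom. A minimal sketch, where the appName and master values are placeholders:

// getOrCreate() returns the existing session if one is already running.
SparkSession sparkSession = SparkSession.builder()
        .appName("test")
        .master("local")
        .getOrCreate();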