Spark: creating a DataFrame inside a UDF

Date: 2018-11-30 03:54:30

Tags: scala apache-spark apache-spark-sql user-defined-functions

I have a case where I want to create a DataFrame inside a UDF, similar to the one below.

import org.apache.spark.ml.classification.LogisticRegressionModel
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.ml.feature.VectorAssembler

Loading the data into a DataFrame:

    val df = Seq((1,1,34,23,34,56),(2,1,56,34,56,23),(3,0,34,23,23,78),(4,0,23,34,78,23),(5,1,56,23,23,12),
(6,1,67,34,56,34),(7,0,23,23,23,56),(8,0,12,34,45,89),(9,1,12,34,12,34),(10,0,12,34,23,34)).toDF("id","label","tag1","tag2","tag3","tag4")
    val assemblerDF = new VectorAssembler().setInputCols(Array("tag1", "tag2", "tag3","tag4")).setOutputCol("features")
    val data = assemblerDF.transform(df)
    val Array(train,test) = data.randomSplit(Array(0.6, 0.4), seed = 11L)
    val testData=test.toDF    

    val loadmodel=LogisticRegressionModel.load("/user/xu/savemodel")
    sc.broadcast(loadmodel)
    val assemblerFe = new VectorAssembler().setInputCols(Array("a", "b", "c","d")).setOutputCol("features")
    sc.broadcast(assemblerFe)

The UDF:

    def predict(predictSet: Vector): Double = {
      val set = Seq((1,2,3,4)).toDF("a","b","c","d")
      val predata = assemblerFe.transform(set)
      val result = loadmodel.transform(predata)
      result.rdd.take(1)(0)(3).toString.toDouble
    }

    spark.udf.register("predict", predict _)
    testData.registerTempTable("datatable")
    spark.sql("SELECT predict(features) FROM datatable").take(1)

I get errors like:

ERROR Executor: Exception in task 3.0 in stage 4.0 (TID 7) [Executor task launch worker for task 7]
org.apache.spark.SparkException: Failed to execute user defined function($anonfun$1: (vector) => double)

WARN TaskSetManager: Lost task 3.0 in stage 4.0 (TID 7, localhost, executor driver): org.apache.spark.SparkException: Failed to execute user defined function($anonfun$1: (vector) => double)

Is a DataFrame not supported here? I am using Spark 2.3.0 and Scala 2.11. Thanks.

1 Answer:

Answer 0 (score: 1)

As mentioned in the comments, you do not need a UDF here to apply the trained model to the test data. You can apply the model to a test DataFrame in the main program, as follows:

val df = Seq((1,1,34,23,34,56),(2,1,56,34,56,23),(3,0,34,23,23,78),(4,0,23,34,78,23),(5,1,56,23,23,12),
(6,1,67,34,56,34),(7,0,23,23,23,56),(8,0,12,34,45,89),(9,1,12,34,12,34),(10,0,12,34,23,34)).toDF("id","label","tag1","tag2","tag3","tag4")
val assemblerDF = new VectorAssembler().setInputCols(Array("tag1", "tag2", "tag3","tag4")).setOutputCol("features")
val data = assemblerDF.transform(df)
val Array(train,test) = data.randomSplit(Array(0.6, 0.4), seed = 11L)
val testData=test.toDF    

val loadmodel=LogisticRegressionModel.load("/user/xu/savemodel")
sc.broadcast(loadmodel)
val assemblerFe = new VectorAssembler().setInputCols(Array("a", "b", "c","d")).setOutputCol("features")
sc.broadcast(assemblerFe)


val set=Seq((1,2,3,4)).toDF("a","b","c","d")
val predata = assemblerFe.transform(set)
val result=loadmodel.transform(predata) // Applying model on predata dataframe. You can apply model on any DataFrame.

result is now a DataFrame. You can register it as a temporary table and query the prediction and features with SQL, or select the prediction and other fields directly from the DataFrame.
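For example, a minimal sketch of both approaches; "features" and "prediction" are the default output column names produced by VectorAssembler and LogisticRegressionModel:

    // result comes from loadmodel.transform(predata) above.
    result.createOrReplaceTempView("predictions")
    spark.sql("SELECT features, prediction FROM predictions").show()

    // Or select directly from the DataFrame, without SQL:
    result.select("features", "prediction").show()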

Note that a UDF is a Spark SQL feature for defining new column-based functions, which extend the vocabulary of Spark SQL's DSL for transforming Datasets. A UDF does not return a DataFrame as its return type, and in general UDFs are not recommended unless necessary. See: https://jaceklaskowski.gitbooks.io/mastering-spark-sql/spark-sql-udfs-blackbox.html
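For contrast, here is a sketch of a legitimate column-level UDF: it transforms only the values passed to it and never touches the SparkSession or creates DataFrames, so it can safely run on executors. The function and column name below are hypothetical, applied to the df from the question:

    import org.apache.spark.sql.functions.udf
    import spark.implicits._

    // A hypothetical per-row function: it closes over no driver-side
    // Spark objects, so it serializes cleanly to the executors.
    val doubled = udf((x: Int) => x * 2)

    // Applied to the "tag1" column of the question's df.
    val out = df.withColumn("tag1_doubled", doubled($"tag1"))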