Spark SQL error on LR predictions

Asked: 2018-05-04 06:00:42

Tags: apache-spark pyspark apache-spark-mllib

I am running the following Jupyter notebook query against the DataFrame "Preds", a simplified DF of the prediction results:

Simple queries against "label" succeed, but the same queries against "prediction" do not, and more complex queries fail as well. I suspect the "prediction" output column of the two-class MLlib LR model may be causing a problem during a possible type cast:

(though I cannot think of why a string conversion would be invoked, unless something is being converted on the input side)

+-----+----------+
|label|prediction|
+-----+----------+
|  1.0|       1.0|
|  1.0|       1.0|
|  1.0|       1.0|
...

root
 |-- label: double (nullable = true)
 |-- prediction: double (nullable = true)
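
For context, a minimal, self-contained sketch of how "Preds" is presumably exposed to the %%sql magic. The toy rows and the createOrReplaceTempView call are assumptions; in the real notebook the DataFrame comes from the fitted model's transform output, which is not shown here:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Toy rows with the same schema as the printSchema() output above
preds = spark.createDataFrame(
    [(1.0, 1.0), (1.0, 1.0), (0.0, 1.0)],
    ["label", "prediction"],
)

# Register the DataFrame so the %%sql magic can refer to it as "Preds"
preds.createOrReplaceTempView("Preds")

preds.printSchema()
spark.sql("SELECT label, prediction FROM Preds").show()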


%%sql 

Select 
case 
    when label = 1.0 and prediction = 1.0 then 'True Positive' 
    when label = 0.0 and prediction = 0.0 then 'True Negative' 
    when label = 0.0 and prediction = 1.0 then 'False Positive' 
    when label = 1.0 and prediction = 0.0 then 'False Negative'
    else 'Unknown' end 
    as Cases 
from Preds
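
For what it's worth, the same CASE logic can also be expressed through the DataFrame API instead of %%sql. This is only a sketch, assuming preds is the DataFrame behind the "Preds" view; since it evaluates the same lineage, it would be expected to hit the same error if the underlying pipeline does:

from pyspark.sql import functions as F

# Same confusion-matrix labelling as the SQL CASE expression above
cases = preds.select(
    F.when((F.col("label") == 1.0) & (F.col("prediction") == 1.0), "True Positive")
     .when((F.col("label") == 0.0) & (F.col("prediction") == 0.0), "True Negative")
     .when((F.col("label") == 0.0) & (F.col("prediction") == 1.0), "False Positive")
     .when((F.col("label") == 1.0) & (F.col("prediction") == 0.0), "False Negative")
     .otherwise("Unknown")
     .alias("Cases")
)

cases.groupBy("Cases").count().show()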

**It looks like the problem comes from ==> "Failed to execute user defined function($anonfun$4: (string) => double)"**

Verbose error log:

 An error was encountered:
 An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 179.0 failed 4 times, most recent failure: Lost task 0.3 in stage 179.0 (TID 3748, wn0-abrshd.s2yinkedijvevogpqsbgf14b1h.hx.internal.cloudapp.net, executor 2): org.apache.spark.SparkException: Failed to execute user defined function($anonfun$4: (string) => double)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377)
at org.apache.spark.sql.Dataset$$anonfun$56$$anon$1.hasNext(Dataset.scala:2712)
at org.apache.spark.sql.Dataset$$anonfun$56$$anon$1.next(Dataset.scala:2718)
at org.apache.spark.sql.Dataset$$anonfun$56$$anon$1.next(Dataset.scala:2711)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:504)
at org.apache.spark.api.python.PythonRunner$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:328)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1963)
at org.apache.spark.api.python.PythonRunner$WriterThread.run(PythonRDD.scala:269)
Caused by: org.apache.spark.SparkException: Unseen label: video store.
at org.apache.spark.ml.feature.StringIndexerModel$$anonfun$4.apply(StringIndexer.scala:170)
at org.apache.spark.ml.feature.StringIndexerModel$$anonfun$4.apply(StringIndexer.scala:166)
... 14 more
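
Note on the trace above: the "Caused by" line points at StringIndexerModel (StringIndexer.scala:170), which raises "Unseen label" when, at transform time, a fitted StringIndexer in the feature pipeline meets a category value (here "video store") that it did not see during fit(). A hedged sketch of the parameter that controls this behaviour follows; the column names are hypothetical and not taken from the question:

from pyspark.sql import SparkSession
from pyspark.ml.feature import StringIndexer

spark = SparkSession.builder.getOrCreate()

indexer = StringIndexer(
    inputCol="category",         # hypothetical string feature column
    outputCol="category_index",
    handleInvalid="skip",        # default "error" raises on unseen labels; "skip" drops those rows
)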

Any hints or comments are appreciated.

0 Answers:

There are no answers yet.