I am having an issue using an h2o model (in mojo format) on a Spark cluster, but only when I try to run it in parallel; it works if I collect and run it on the driver.
Since the dataframe I am predicting on has >100 features, I use the following function to convert the dataframe rows to h2o's RowData format (taken from here):
import org.apache.spark.sql.{DataFrame, Row}
import hex.genmodel.easy.RowData

def rowToRowData(df: DataFrame, row: Row): RowData = {
  val rowAsMap = row.getValuesMap[Any](df.schema.fieldNames)
  val rowData = rowAsMap.foldLeft(new RowData()) { case (rd, (k, v)) =>
    // RowData is a String -> Object map; skip nulls and stringify the rest
    if (v != null) { rd.put(k, v.toString) }
    rd
  }
  rowData
}
Then I import the mojo model and create an EasyPredictModelWrapper:
import hex.genmodel.MojoModel
import hex.genmodel.easy.EasyPredictModelWrapper
val mojo = MojoModel.load("/path/to/mojo.zip")
val easyModel = new EasyPredictModelWrapper(mojo)
Now, I can make predictions on my dataframe (df) by mapping over the rows, provided I collect them first, so the following works:
val predictions = df.collect().map { r =>
  val rData = rowToRowData(df, r) // convert the Row to RowData using the function above
  val prediction = easyModel.predictBinomial(rData).label
  (r.getAs[String]("id"), prediction.toInt)
}
  .toSeq
  .toDF("id", "prediction") // .toDF needs spark.implicits._ in scope
However, I want to do this in parallel on the cluster, since the final df will be too large to collect on the driver. But if I try to run the same code without collecting first:
val predictions = df.map { r =>
  val rData = rowToRowData(df, r)
  val prediction = easyModel.predictBinomial(rData).label
  (r.getAs[String]("id"), prediction.toInt)
}
  .toDF("id", "prediction")
I get the following error:
18/01/03 11:34:59 WARN TaskSetManager: Lost task 0.0 in stage 118.0 (TID 9914, 213.248.241.182, executor 0): java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2133)
at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1305)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2024)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:373)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:80)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
So it looks like a data type mismatch. I have tried converting the dataframe to an RDD first (i.e. df.rdd.map, but I get the same error), using df.mapPartitions, and placing the rowToRowData function code inside the map, but nothing has worked so far.
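For concreteness, a sketch of what the df.rdd.mapPartitions attempt looks like (reconstructed from the description above; it fails with the same error):

val predictions = df.rdd.mapPartitions { rows =>
  rows.map { r =>
    val rData = rowToRowData(df, r) // note: this still closes over df
    val prediction = easyModel.predictBinomial(rData).label
    (r.getAs[String]("id"), prediction.toInt)
  }
}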
Any ideas on the best way to achieve this?
Answer 0 (score: 0)
I found a messy Spark ticket, https://issues.apache.org/jira/browse/SPARK-18075, describing the same problem in relation to different ways of submitting a Spark application. Take a look; maybe it will give you a clue about your issue.
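One more thing worth checking (my assumption, not something the ticket confirms): rowToRowData(df, r) references df inside the df.map closure, so Spark tries to serialize the DataFrame's underlying RDD into each task, which can produce exactly this ClassCastException on deserialization. A minimal sketch of a closure that captures only a plain Array[String] of field names, assuming EasyPredictModelWrapper serializes cleanly to the executors:

import spark.implicits._ // assumes a SparkSession named spark; needed for the map encoder and .toDF

val fieldNames = df.schema.fieldNames // plain Array[String], cheap to serialize
val easyModel = new EasyPredictModelWrapper(MojoModel.load("/path/to/mojo.zip")) // as above

val predictions = df.map { r =>
  // build the RowData without touching df inside the closure
  val rowData = r.getValuesMap[Any](fieldNames).foldLeft(new RowData()) {
    case (rd, (k, v)) =>
      if (v != null) rd.put(k, v.toString)
      rd
  }
  (r.getAs[String]("id"), easyModel.predictBinomial(rowData).label.toInt)
}.toDF("id", "prediction")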
Answer 1 (score: 0)
You can't call prediction.toInt. The returned prediction is a tuple, and you need to extract the second element of that tuple to get the actual score for level 1. I have a complete example here: https://stackoverflow.com/a/47898040/9120484
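For illustration, a sketch against the h2o-genmodel API as I understand it (exact fields may vary by version): predictBinomial returns a BinomialModelPrediction, whose classProbabilities array holds the per-level scores:

import hex.genmodel.easy.prediction.BinomialModelPrediction

val p: BinomialModelPrediction = easyModel.predictBinomial(rData)
val label = p.label                       // predicted class as a String
val scoreLevel1 = p.classProbabilities(1) // probability of level 1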