How do you use the Play Framework with a Spark cluster during development?
I can run any Spark application with the master set to local[*],
but if I set it to run on the cluster, I get:
play.api.Application$$anon$1: Execution exception[[SparkException: Job aborted due to stage failure: Task 1 in stage 0.0 failed 4 times, most recent failure: Lost task 1.3 in stage 0.0 (TID 5, 192.168.1.239): java.lang.ClassNotFoundException: controllers.Application$$anonfun$test$1$$anonfun$2
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:67)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1620)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1521)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1781)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:373)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I understand the problem is that the distributed workers have not loaded my application's classes.
So how do you use Spark under Lightbend Activator? Submitting a Play Framework application through the command line makes no sense; it should run under Play, so you can see the results in the browser.
I downloaded the Lightbend sample Spark applications, and they all use local[*] as the Spark master. If I switch them to a spark://master:port URL, they hit the same problem.
Does anyone know how to fix this? Thanks in advance.
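For context, a controller along these lines reproduces the setup described above. This is a hypothetical sketch (the controller name matches the one in the stack trace, but the configuration and job are illustrative, not from the original post):

```scala
package controllers

import org.apache.spark.{SparkConf, SparkContext}
import play.api.mvc._

// Sketch: a Play controller that runs a small Spark job.
// With master "local[*]" this works; pointing it at a cluster URL such
// as "spark://master:7077" triggers the ClassNotFoundException above,
// because the executors cannot see the application's classes.
object Application extends Controller {

  private val conf = new SparkConf()
    .setAppName("play-spark-demo")      // illustrative app name
    .setMaster("local[*]")              // or "spark://master:7077"
  private val sc = new SparkContext(conf)

  def test = Action {
    // The closure passed to map is compiled into an anonymous class
    // (controllers.Application$$anonfun$test$...) that must be on the
    // executors' classpath -- exactly the class the trace cannot find.
    val sum = sc.parallelize(1 to 100).map(_ * 2).reduce(_ + _)
    Ok(s"Sum: $sum")
  }
}
```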
Answer 0 (score: 2):
The "Advanced Dependency Management" section of the Spark documentation explains how the master distributes JARs to the worker nodes.
From there, it is just a matter of translating the --jars command-line option into a call to addJar on the SparkContext.
Generate the JAR with activator dist; it will be under target/scala-2.<version>. Then add the path to that file via addJar.
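Concretely, the fix can be sketched like this (the JAR name, Scala version, and master URL are illustrative; use whatever path activator dist actually produces for your project):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch of the fix: ship the application JAR built by `activator dist`
// to the workers, so executors can resolve the controller's closures.
val conf = new SparkConf()
  .setAppName("play-spark-demo")
  .setMaster("spark://master:7077")   // illustrative cluster master URL

val sc = new SparkContext(conf)

// Illustrative path: point at the JAR produced under
// target/scala-2.11/ by `activator dist` for your project.
sc.addJar("target/scala-2.11/my-play-app_2.11-1.0.jar")
```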
It now works perfectly.
The only problem is that, during development, Play restarts the application in the same JVM whenever you change a file, which triggers Spark's error about having two contexts in one JVM. So you need to restart the application to test changes. Given how powerful Spark under Play is, that is a minor nuisance. Cheers!
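One way to soften that restart nuisance (a sketch of my own, not part of the original answer) is to create the SparkContext through dependency injection and register a stop hook, so Play stops the old context before the reloaded code tries to create a new one:

```scala
import javax.inject.{Inject, Singleton}
import scala.concurrent.Future

import org.apache.spark.{SparkConf, SparkContext}
import play.api.inject.ApplicationLifecycle

// Sketch (assumes Play 2.4+ with runtime DI): stop the SparkContext
// when Play reloads the application, avoiding the "only one
// SparkContext per JVM" error on the next start.
@Singleton
class SparkProvider @Inject()(lifecycle: ApplicationLifecycle) {

  val sc: SparkContext = new SparkContext(
    new SparkConf()
      .setAppName("play-spark-demo")  // illustrative configuration
      .setMaster("local[*]"))

  lifecycle.addStopHook { () =>
    Future.successful(sc.stop())
  }
}
```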