StackOverflowError when training an engine with PredictionIO

Asked: 2017-02-27 09:36:47

Tags: java stack-overflow predictionio

I installed PredictionIO by following the quickstart guide at http://predictionio.incubator.apache.org/templates/recommendation/quickstart/

The EventServer runs fine, the new app was created successfully, GET and POST requests work, and importing the sample data works.

pio build

works fine. But when I try to run

pio train

I get this error:

[INFO] [Engine] Extracting datasource params...
[INFO] [WorkflowUtils$] No 'name' is found. Default empty String will be used.
[INFO] [Engine] Datasource params: (,DataSourceParams(MyApp1,None))
[INFO] [Engine] Extracting preparator params...
[INFO] [Engine] Preparator params: (,Empty)
[INFO] [Engine] Extracting serving params...
[INFO] [Engine] Serving params: (,Empty)
[WARN] [Utils] Your hostname, damiano-asus resolves to a loopback address: 127.0.0.1; using 10.0.10.150 instead (on interface wlp3s0)
[WARN] [Utils] Set SPARK_LOCAL_IP if you need to bind to another address
[INFO] [Remoting] Starting remoting
[INFO] [Remoting] Remoting started; listening on addresses :[akka.tcp://sparkDriver@10.0.10.150:33231]
[WARN] [MetricsSystem] Using default name DAGScheduler for source because spark.app.id is not set.
[INFO] [Engine$] EngineWorkflow.train
[INFO] [Engine$] DataSource: damiano.company.DataSource@483b0690
[INFO] [Engine$] Preparator: damiano.company.Preparator@fb0a08c
[INFO] [Engine$] AlgorithmList: List(damiano.company.ALSAlgorithm@6a5e167a)
[INFO] [Engine$] Data sanity check is on.
[INFO] [Engine$] damiano.company.TrainingData does not support data sanity check. Skipping check.
[INFO] [Engine$] damiano.company.PreparedData does not support data sanity check. Skipping check.
[WARN] [BLAS] Failed to load implementation from: com.github.fommil.netlib.NativeSystemBLAS
[WARN] [BLAS] Failed to load implementation from: com.github.fommil.netlib.NativeRefBLAS
[WARN] [LAPACK] Failed to load implementation from: com.github.fommil.netlib.NativeSystemLAPACK
[WARN] [LAPACK] Failed to load implementation from: com.github.fommil.netlib.NativeRefLAPACK
[Stage 29:>                                                         (0 + 0) / 4][ERROR] [Executor] Exception in task 1.0 in stage 40.0 (TID 137)
[ERROR] [Executor] Exception in task 0.0 in stage 40.0 (TID 136)
[ERROR] [Executor] Exception in task 3.0 in stage 40.0 (TID 139)
[ERROR] [Executor] Exception in task 2.0 in stage 40.0 (TID 138)
[WARN] [TaskSetManager] Lost task 1.0 in stage 40.0 (TID 137, localhost): java.lang.StackOverflowError
    at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2901)
    at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1700)

I guess this is caused by the JVM heap/stack size? Does anyone know how to fix this? Thanks.

2 Answers:

Answer 0 (score: 2)

There is a quick fix for the problem: you can edit the engine.json file to lower the number of iterations, for example "numIterations": 10 instead of 20.
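For reference, the ALS parameters live under the algorithms section of engine.json. A minimal sketch of that section, assuming the standard recommendation-template layout (the rank, lambda, and seed values here are illustrative defaults, not taken from this question):

```json
{
  "algorithms": [
    {
      "name": "als",
      "params": {
        "rank": 10,
        "numIterations": 10,
        "lambda": 0.01,
        "seed": 3
      }
    }
  ]
}
```

After changing the file, run pio build again before pio train so the new parameters take effect.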

But if you need to increase the available memory instead, you can follow the FAQ linked at the bottom of the quickstart guide page:

http://predictionio.incubator.apache.org/resources/faq/#engine-training

Start the Spark master with ./sbin/start-master.sh (if you followed the guide, Spark is in the .../vendors/spark-1.5.1-bin-hadoop2.6/ folder). Start a Spark slave with ./sbin/start-slave.sh spark://<yourlocalhost>:7077. You should then be able to see everything running at http://localhost:8080

Then you can run
pio train -- --master spark://localhost:7077 --driver-memory 16G --executor-memory 24G
On my laptop I can run 20 iterations with --driver-memory 3G --executor-memory 4G.

If you still have problems, try configuring the spark-defaults.conf file to suit your needs. I also had to add a line at the beginning of that .conf file: spark.mesos.coarse true
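As a sketch, the resulting conf/spark-defaults.conf could then contain something like the following (the memory values are illustrative and should match what you pass to pio train):

```
spark.mesos.coarse      true
spark.driver.memory     3g
spark.executor.memory   4g
```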

Answer 1 (score: 0)

Upgrade your Spark version to spark-1.6.3-bin-hadoop2.6. I ran into a similar problem, and upgrading to that version solved it for me.