PredictionIO training fails

Asked: 2017-10-26 06:14:18

Tags: apache-spark apache-spark-mllib predictionio

I am new to PredictionIO. I am using the template https://github.com/EmergentOrder/template-scala-probabilistic-classifier-batch-lbfgs

My training dataset has 1,184,603 rows with about 6,500 features. I am running on an EC2 r4.8xlarge instance (240 GB RAM, 32 cores, 200 GB swap).

I tried training in two ways:

  1. On a single machine, with the command:

    pio train -- --driver-memory 120G --executor-memory 100G --conf spark.network.timeout=10000000

    It throws an exception after 3-4 hours:

    Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 15, localhost, executor driver): ExecutorLostFailure (executor driver exited caused by one of the running tasks) Reason: Executor heartbeat timed out after 181529 ms
    Driver stacktrace:
            at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
            at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
            at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
            at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
            at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
            at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
            at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
            at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
            at scala.Option.foreach(Option.scala:257)
            at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
            at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
            at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
            at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
            at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
            at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
            at org.apache.spark.SparkContext.runJob(SparkContext.scala:1918)
            at org.apache.spark.SparkContext.runJob(SparkContext.scala:1931)
            at org.apache.spark.SparkContext.runJob(SparkContext.scala:1944)
            at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1353)
            at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
            at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
            at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
            at org.apache.spark.rdd.RDD.take(RDD.scala:1326)
            at org.example.classification.LogisticRegressionWithLBFGSAlgorithm.train(LogisticRegressionWithLBFGSAlgorithm.scala:28)
            at org.example.classification.LogisticRegressionWithLBFGSAlgorithm.train(LogisticRegressionWithLBFGSAlgorithm.scala:21)
            at org.apache.predictionio.controller.P2LAlgorithm.trainBase(P2LAlgorithm.scala:49)
            at org.apache.predictionio.controller.Engine$$anonfun$18.apply(Engine.scala:692)
            at org.apache.predictionio.controller.Engine$$anonfun$18.apply(Engine.scala:692)
            at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
            at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
            at scala.collection.immutable.List.foreach(List.scala:381)
            at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
            at scala.collection.immutable.List.map(List.scala:285)
            at org.apache.predictionio.controller.Engine$.train(Engine.scala:692)
            at org.apache.predictionio.controller.Engine.train(Engine.scala:177)
            at org.apache.predictionio.workflow.CoreWorkflow$.runTrain(CoreWorkflow.scala:67)
            at org.apache.predictionio.workflow.CreateWorkflow$.main(CreateWorkflow.scala:250)
            at org.apache.predictionio.workflow.CreateWorkflow.main(CreateWorkflow.scala)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:498)
            at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
            at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
            at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
            at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
            at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
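A note on the exception above: "Executor heartbeat timed out" means the driver stopped receiving heartbeats from the executor (here typically because the JVM was stalled in long GC pauses or was under memory pressure). Spark expects `spark.executor.heartbeatInterval` to be much smaller than `spark.network.timeout`; both are real Spark properties, but the values below are only illustrative, not taken from the original post. A sketch of the relevant settings, e.g. in `conf/spark-defaults.conf`:

```
# Illustrative values only: how long the driver waits before declaring
# an executor lost, and how often executors send heartbeats.
spark.network.timeout              800s
spark.executor.heartbeatInterval   60s
```

Raising the timeout only buys time; if the executor is stalled in garbage collection, the underlying memory pressure still has to be addressed.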
    
  2. On a Spark standalone cluster with 1 master and 3 workers, with the command:

    pio train -- --master spark://<master-host>:7077 --driver-memory 50G --executor-memory 50G

    After a while I got an error: the executor could not connect to the master, and training stopped.

    I also reduced the feature count from 6,500 to just over 500, but the behavior is the same. Can anyone suggest what I am missing?
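One thing worth checking (a sketch of an assumption, not something stated in the original post): the stack trace fails inside the template's `train` method while iterating over the training RDD, and L-BFGS makes many passes over the data. Persisting and repartitioning the input before handing it to the optimizer can reduce recomputation and memory spikes. Assuming `data` is the `RDD[LabeledPoint]` the template builds:

```scala
import org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.rdd.RDD
import org.apache.spark.storage.StorageLevel

// Sketch only: cache the training data so each L-BFGS iteration
// does not recompute it, and spill to disk rather than losing executors.
def trainCached(data: RDD[LabeledPoint]) = {
  val cached = data
    .repartition(128)                          // spread ~1.18M rows across many tasks (count is an example)
    .persist(StorageLevel.MEMORY_AND_DISK_SER)
  val model = new LogisticRegressionWithLBFGS()
    .setNumClasses(2)
    .run(cached)
  cached.unpersist()
  model
}
```

The partition count and storage level here are illustrative; the point is to avoid recomputing a wide (6,500-feature) dataset on every optimizer pass.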

    Also, during training the program continuously emits warnings like:

      [WARN] [ScannerCallable] Ignore, probably already closed

0 answers