Spark2 - LogisticRegression training finished but the result is not converged because: line search failed

Asked: 2016-08-16 18:14:11

Tags: scala apache-spark apache-spark-mllib logistic-regression

While training a logistic regression classifier, I get the following error:

2016-08-16 20:50:23,833 ERROR [main] optimize.LBFGS (Logger.scala:error(27)) - Failure! Resetting history: breeze.optimize.FirstOrderException: Line search zoom failed
2016-08-16 20:50:24,009 INFO  [main] optimize.StrongWolfeLineSearch (Logger.scala:info(11)) - Line search t: 0.9 fval: 0.4515497761131565 rhs: 0.45154977611314895 cdd: 3.4166889881493167E-16

The program then continues for a while, but later I hit this error:

2016-08-16 20:50:24,365 ERROR [main] optimize.LBFGS (Logger.scala:error(27)) - Failure again! Giving up and returning. Maybe the objective is just poorly behaved?
2016-08-16 20:50:24,367 WARN  [main] classification.LogisticRegression (Logging.scala:logWarning(66)) - LogisticRegression training finished but the result is not converged because: line search failed!
2016-08-16 20:50:27,143 INFO  [main] optimize.StrongWolfeLineSearch (Logger.scala:info(11)) - Line search t: 0.4496001808762097 fval: 0.5641490068577 rhs: 0.6931115872739131 cdd: 0.01924752705390458
2016-08-16 20:50:27,143 INFO  [main] optimize.LBFGS (Logger.scala:info(11)) - Step Size: 0.4496
2016-08-16 20:50:27,144 INFO  [main] optimize.LBFGS (Logger.scala:info(11)) - Val and Grad Norm: 0.564149 (rel: 0.186) 0.622296
2016-08-16 20:50:27,181 INFO  [main] optimize.LBFGS (Logger.scala:info(11)) - Step Size: 1.000
2016-08-16 20:50:27,181 INFO  [main] optimize.LBFGS (Logger.scala:info(11)) - Val and Grad Norm: 0.484949 (rel: 0.140) 0.285684
2016-08-16 20:50:27,226 INFO  [main] optimize.LBFGS (Logger.scala:info(11)) - Step Size: 1.000
2016-08-16 20:50:27,226 INFO  [main] optimize.LBFGS (Logger.scala:info(11)) - Val and Grad Norm: 0.458425 (rel: 0.0547) 0.0789000
2016-08-16 20:50:27,263 INFO  [main] optimize.LBFGS (Logger.scala:info(11)) - Step Size: 1.000

Training then continues.

Even though training appears to complete successfully (I get a model, make predictions on the test set, validate the classifier, and so on), this error worries me. Any idea what it means? Any suggestions on how to overcome it? (I use 10,000 as the maximum number of iterations.)
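For context, the flow I am describing looks roughly like the sketch below; the DataFrame names trainingData and testData and the column names are placeholders, not my exact code:

    import org.apache.spark.ml.classification.LogisticRegression

    // Rough sketch of the training/evaluation flow (placeholder data and column names)
    val lr = new LogisticRegression()
      .setMaxIter(10000)          // the 10,000 maximum iterations mentioned above
      .setLabelCol("label")
      .setFeaturesCol("features")

    val model = lr.fit(trainingData)             // the warnings above are logged during this call
    val predictions = model.transform(testData)  // predictions on the test set still look reasonable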

1 Answer:

Answer 0 (score: 3)

The problem lies in the LBFGS optimizer that the logistic regression algorithm uses.

This error is most likely to occur when the gradient is wrong or the convergence tolerance is set too tight.

In my case, I was running the algorithm as follows:

    import org.apache.spark.ml.classification.LogisticRegression

    // Note: setTol(0.0) sets a zero convergence tolerance, which is what triggers the line search failure
    val lr = new LogisticRegression()
      .setFitIntercept(true)
      .setRegParam(0.3)
      .setMaxIter(100000)
      .setTol(0.0)
      .setStandardization(true)
      .setWeightCol("classWeightCol")
      .setLabelCol("label")
      .setFeaturesCol("features")

The convergence tolerance of iterations was set to 0 (setTol(0.0)). The Spark documentation states:

"Smaller value will lead to higher accuracy with the cost of more iterations. Default is 1E-6. "

However, once I changed the setter to setTol(0.1), the line search error no longer appeared.
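For illustration, the adjusted configuration looks roughly like this (a sketch of the same setup as above, with only the tolerance changed):

    import org.apache.spark.ml.classification.LogisticRegression

    // Same configuration as before, but with a non-zero convergence tolerance
    val lr = new LogisticRegression()
      .setFitIntercept(true)
      .setRegParam(0.3)
      .setMaxIter(100000)
      .setTol(0.1)                 // relaxed from 0.0; the line search failure no longer appears
      .setStandardization(true)
      .setWeightCol("classWeightCol")
      .setLabelCol("label")
      .setFeaturesCol("features")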

Another option if the model still fails to converge is to increase the number of iterations.