Java Spark MLlib: error "ERROR OWLQN: Failure! Resetting history: breeze.optimize.NaNHistory:" when running Logistic Regression from the ml library

Date: 2017-07-28 19:51:35

Tags: java hadoop apache-spark logistic-regression apache-spark-ml

I am simply trying to run Logistic Regression with the Apache Spark ml library, but every time I try it I get an error message like

"ERROR OWLQN: Failure! Resetting history: breeze.optimize.NaNHistory:"

A sample of the dataset for the logistic regression looks like this:

+-----+---------+---------+---------+--------+-------------+
|state|dayOfWeek|hourOfDay|minOfHour|secOfMin|     features|
+-----+---------+---------+---------+--------+-------------+
|  1.0|      7.0|      0.0|      0.0|     0.0|(4,[0],[7.0])|

The code for the logistic regression is as follows:

// Data set
StructType schema = new StructType(new StructField[]{
    new StructField("state", DataTypes.DoubleType, false, Metadata.empty()),
    new StructField("dayOfWeek", DataTypes.DoubleType, false, Metadata.empty()),
    new StructField("hourOfDay", DataTypes.DoubleType, false, Metadata.empty()),
    new StructField("minOfHour", DataTypes.DoubleType, false, Metadata.empty()),
    new StructField("secOfMin", DataTypes.DoubleType, false, Metadata.empty())
});
List<Row> dataFromRDD = bucketsForMLs.map(p -> {
    return RowFactory.create(p.label(), p.features().apply(0), p.features().apply(1), p.features().apply(2), p.features().apply(3));
}).collect();

Dataset<Row> stateDF = sparkSession.createDataFrame(dataFromRDD, schema);
String[] featureCols = new String[]{"dayOfWeek", "hourOfDay", "minOfHour", "secOfMin"};
VectorAssembler vectorAssembler = new VectorAssembler().setInputCols(featureCols).setOutputCol("features");
Dataset<Row> stateDFWithFeatures = vectorAssembler.transform(stateDF);

StringIndexer labelIndexer = new StringIndexer().setInputCol("state").setOutputCol("label");
Dataset<Row> stateDFWithLabelAndFeatures = labelIndexer.fit(stateDFWithFeatures).transform(stateDFWithFeatures);

MLRExecutionForDF mlrExe = new MLRExecutionForDF(javaSparkContext);
mlrExe.execute(stateDFWithLabelAndFeatures);

// Logistic Regression part
LogisticRegressionModel lrModel = new LogisticRegression()
    .setMaxIter(maxItr)
    .setRegParam(regParam)
    .setElasticNetParam(elasticNetParam)
    // The error is thrown by this fit() call
    .fit(stateDFWithLabelAndFeatures);

1 Answer:

Answer 0 (score: 0)

I just ran into the same error. It comes from the breeze ScalaNLP package, which Spark simply imports. It says something about a derivative that could not be produced.

I am not sure exactly what that means, but with my dataset I could argue that the less data I used, the more often the error was thrown: the more features were missing for the classes being trained, the more often the error occurred. I think it has to do with the optimization not being able to proceed properly because of missing information for the classes; a quick check like the sketch below makes this concrete.
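Here is a minimal diagnostic sketch (my own addition, not something the breeze message itself asks for): it counts the rows per label and looks at how many non-zero entries the assembled feature vectors actually carry. The column names ("label", "features") and the DataFrame variable are taken from the question's pipeline.

import org.apache.spark.ml.linalg.Vector;
import org.apache.spark.sql.Row;

// Rows per class: classes with very few rows give the optimizer little to work with.
stateDFWithLabelAndFeatures.groupBy("label").count().show();

// Non-zero entries per assembled feature vector; mostly-zero vectors mean
// little usable signal for the class they belong to.
for (Row row : stateDFWithLabelAndFeatures.select("features").limit(20).collectAsList()) {
    Vector v = row.getAs("features");
    System.out.println("non-zero features: " + v.numNonzeros());
}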

Nevertheless, the error does not seem to stop the code from running.
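One further note that goes beyond the explanation above: as far as I can tell, Spark's LogisticRegression only hands the problem to breeze's OWLQN optimizer when there is an L1 component, i.e. elasticNetParam > 0 together with regParam > 0; with a pure L2 penalty it uses plain L-BFGS. So if the message really bothers you, a configuration along these lines should avoid OWLQN altogether (the parameter values here are placeholders, not recommendations):

// elasticNetParam = 0.0 keeps the penalty pure L2, which, to my understanding,
// makes Spark use L-BFGS instead of OWLQN. maxIter and regParam are placeholders.
LogisticRegressionModel lrModel = new LogisticRegression()
    .setMaxIter(100)
    .setRegParam(0.01)
    .setElasticNetParam(0.0)
    .fit(stateDFWithLabelAndFeatures);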