How can I get the p-values of a logistic regression in Spark MLlib using Java? And how can I find the probability of a classification? Here is the code I have tried:
import org.apache.spark.SparkConf;
import org.apache.spark.SparkContext;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.mllib.classification.LogisticRegressionModel;
import org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;
import org.apache.spark.mllib.regression.LabeledPoint;
import org.apache.spark.mllib.util.MLUtils;
import scala.Tuple2;

SparkConf sparkConf = new SparkConf().setAppName("GRP").setMaster("local[*]");
SparkContext ctx = new SparkContext(sparkConf);

LabeledPoint pos = new LabeledPoint(1.0, Vectors.dense(1.0, 0.0, 3.0)); // (unused)

// Load the data set in LIBSVM format and split it 60/40 into training and test sets.
String path = "dataSetnew.txt";
JavaRDD<LabeledPoint> data = MLUtils.loadLibSVMFile(ctx, path).toJavaRDD();
JavaRDD<LabeledPoint>[] splits = data.randomSplit(new double[] {0.6, 0.4}, 11L);
JavaRDD<LabeledPoint> training = splits[0].cache();
JavaRDD<LabeledPoint> test = splits[1];

// Train a binary logistic regression model with an intercept term.
final LogisticRegressionModel model =
    new LogisticRegressionWithLBFGS()
        .setNumClasses(2)
        .setIntercept(true)
        .run(training.rdd());

// Predict on the test set, pairing each prediction with its true label.
JavaRDD<Tuple2<Object, Object>> predictionAndLabels = test.map(
    new Function<LabeledPoint, Tuple2<Object, Object>>() {
        public Tuple2<Object, Object> call(LabeledPoint p) {
            Double prediction = model.predict(p.features());
            return new Tuple2<Object, Object>(prediction, p.label());
        }
    }
);

// Predict a single new observation and inspect the fitted model.
Vector denseVecnew = Vectors.dense(112, 110, 110, 0, 0, 0, 0, 0, 0, 0, 0);
Double prediction = model.predict(denseVecnew);
Vector weightVector = model.weights();
System.out.println("weights : " + weightVector);
System.out.println("intercept : " + model.intercept());
System.out.println("forecast : " + prediction);
ctx.stop();
Answer 0 (score: 1):
For binary classification you can use the LogisticRegressionModel.clearThreshold method. After it is called, predict returns the raw score instead of a 0/1 label. The scores lie in the [0, 1] range and can be interpreted as probabilities.
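A minimal sketch of how that could look with the model and the denseVecnew vector from the question (note that clearThreshold() mutates the model, so all subsequent predict calls also return scores):

// Switch predict() from returning 0/1 labels to returning the raw score,
// which for binary logistic regression can be read as P(label = 1).
model.clearThreshold();
double probability = model.predict(denseVecnew);
System.out.println("P(label = 1) : " + probability);

// To go back to hard 0/1 predictions, restore a decision threshold:
model.setThreshold(0.5);

As far as I know, the RDD-based MLlib API (LogisticRegressionWithLBFGS) does not expose coefficient p-values; clearThreshold only gives you the predicted class probability.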