Manual prediction with LogisticRegressionModel

Date: 2016-05-04 19:11:17

Tags: scala apache-spark logistic-regression

I am trying to predict the label for each row of a DataFrame, but without using LogisticRegressionModel's transform method (for reasons beyond the scope of this question). Instead, I am computing it manually with the classic formula 1 / (1 + e^(-hθ(x))). Note that I copied the code from Apache Spark's repository, porting almost everything from the private object BLAS into a public version of it. PS: I am not using any other BLAS; I simply fitted the model.


Notice that I had to obtain the intercept and coefficients from my model:

```scala
val intercept = model.intercept
val coefficients = model.coefficients

val margin: Vector => Double = (features) => {
  BLAS.dot(features, coefficients) + intercept
}

val score: Vector => Double = (features) => {
  val m = margin(features)
  1.0 / (1.0 + math.exp(-m))
}
```

After defining these functions and obtaining the model's parameters, I created a udf to compute the predictions (it receives the same features as the model). Later I compared my predictions with the real model's, and they are very different! So what am I missing? What am I doing wrong?
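For reference, the margin/score math above can be checked in isolation, without Spark. The sketch below uses hypothetical toy coefficients (not values from any real model) and a plain `Array[Double]` in place of Spark's `Vector` and `BLAS.dot`:

```scala
// Standalone sketch of the same margin/score arithmetic.
// The coefficients and intercept here are made-up toy values.
object SigmoidSketch {
  val coefficients = Array(0.5, -0.25)
  val intercept    = 0.1

  // Dot product of features and coefficients, plus the intercept.
  def margin(features: Array[Double]): Double =
    features.zip(coefficients).map { case (x, w) => x * w }.sum + intercept

  // Logistic function applied to the margin: 1 / (1 + e^(-m)).
  def score(features: Array[Double]): Double =
    1.0 / (1.0 + math.exp(-margin(features)))

  def main(args: Array[String]): Unit = {
    // margin = 2*0.5 + 4*(-0.25) + 0.1 = 0.1, so score ≈ 0.525
    println(score(Array(2.0, 4.0)))
  }
}
```

This mirrors the formula in the question: the score is just the sigmoid of the linear margin.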


Edit:

I even tried defining these functions inside the udf, but it didn't work:

```scala
val predict = udf((v: DenseVector) => {
  val recency = v(0)
  val frequency = v(1)
  val tp = score(new DenseVector(Array(recency, frequency)))
  new DenseVector(Array(tp, 1 - tp))
})

// model's predictions
val xf = model.transform(df)

df.select(col("id"), predict(col("features")).as("myprediction"))
  .join(xf, df("id") === xf("id"), "inner")
  .select(df("id"), col("probability"), col("myprediction"))
  .show
```

```
+----+--------------------+--------------------+
|  id|         probability|        myprediction|
+----+--------------------+--------------------+
|  31|[0.97579780436514...|[0.98855386037790...|
| 231|[0.97579780436514...|[0.98855386037790...|
| 431|[0.69794428333266...|           [1.0,0.0]|
| 631|[0.97579780436514...|[0.98855386037790...|
| 831|[0.97579780436514...|[0.98855386037790...|
|1031|[0.96509616791398...|[0.99917463322937...|
|1231|[0.96509616791398...|[0.99917463322937...|
|1431|[0.96509616791398...|[0.99917463322937...|
|1631|[0.94231815700848...|[0.99999999999999...|
|1831|[0.96509616791398...|[0.99917463322937...|
|2031|[0.96509616791398...|[0.99917463322937...|
|2231|[0.96509616791398...|[0.99917463322937...|
|2431|[0.95353743438055...|           [1.0,0.0]|
|2631|[0.94646924057674...|           [1.0,0.0]|
|2831|[0.96509616791398...|[0.99917463322937...|
|3031|[0.96509616791398...|[0.99917463322937...|
|3231|[0.95971207153567...|[0.99999999999996...|
|3431|[0.96509616791398...|[0.99917463322937...|
|3631|[0.96509616791398...|[0.99917463322937...|
|3831|[0.96509616791398...|[0.99917463322937...|
+----+--------------------+--------------------+
```


1 Answer:

Answer 0 (score: 1):

This is quite embarrassing, but the problem was actually that I used a Pipeline with a MinMaxScaler as one of its stages, so the dataset was scaled before the model was trained. As a result, the two parameters, coefficients and intercept, were associated with the scaled data, and when I used them to compute my predictions the results were completely biased. To solve it, I simply trained on the non-scaled dataset, so that the coefficients and intercept I obtained refer to the original features. After re-running the code I got the same results as Spark. On the other hand, I followed @zero323's advice and moved the margin and score definitions into the udf's lambda declaration.
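An alternative to retraining on unscaled data is to map the learned parameters back to raw-feature space. For a min-max scaler with x'ᵢ = (xᵢ - minᵢ)/(maxᵢ - minᵢ), the scaled-space model w'·x' + b' is equivalent to a raw-space model with wᵢ = w'ᵢ/(maxᵢ - minᵢ) and b = b' - Σᵢ w'ᵢ·minᵢ/(maxᵢ - minᵢ). The sketch below illustrates this with hypothetical numbers (none of these come from the question's model), showing that both forms produce the same margin:

```scala
// Illustrative sketch: mapping coefficients learned on MinMax-scaled
// features back to raw-feature space. All numbers here are made up.
object UnscaleSketch {
  val mins    = Array(0.0, 10.0)   // per-feature min used by the scaler
  val maxs    = Array(5.0, 50.0)   // per-feature max used by the scaler
  val wScaled = Array(2.0, -1.5)   // coefficients learned on scaled data
  val bScaled = 0.3                // intercept learned on scaled data

  // Raw-space equivalents: w_i = w'_i / (max_i - min_i),
  // b = b' - sum_i w'_i * min_i / (max_i - min_i)
  val wRaw = wScaled.indices.map(i => wScaled(i) / (maxs(i) - mins(i))).toArray
  val bRaw = bScaled -
    wScaled.indices.map(i => wScaled(i) * mins(i) / (maxs(i) - mins(i))).sum

  // Scale the features first, then apply the scaled-space model.
  def marginScaled(x: Array[Double]): Double = {
    val xs = x.indices.map(i => (x(i) - mins(i)) / (maxs(i) - mins(i)))
    xs.indices.map(i => xs(i) * wScaled(i)).sum + bScaled
  }

  // Apply the raw-space model directly to the unscaled features.
  def marginRaw(x: Array[Double]): Double =
    x.indices.map(i => x(i) * wRaw(i)).sum + bRaw

  def main(args: Array[String]): Unit = {
    val x = Array(3.0, 25.0)
    // Both margins are identical, so the sigmoid scores are too.
    println(marginScaled(x))
    println(marginRaw(x))
  }
}
```

This also explains the symptom in the question: feeding raw features to coefficients learned on scaled data silently shifts and rescales every margin, which is why the manual probabilities were so far off.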