Understanding this implementation of logistic regression

Time: 2017-01-31 13:58:59

Tags: python scikit-learn logistic-regression

Following this example of logistic regression with scikit-learn: https://analyticsdataexploration.com/logistic-regression-using-python/

After running the prediction, the following is produced:

predictions=modelLogistic.predict(test[predictor_Vars])
predictions
array([0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1,
       0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0,
       0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0,
       1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0,
       1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1,
       0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0,
       1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1,
       0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0,
       1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1,
       0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0,
       0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0,
       0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1,
       0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0,
       0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0,
       0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0,
       1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1,
       1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0,
       1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0,
       1, 0, 0, 0], dtype=int64)

I can't make sense of the array values. I think they are related to the logistic function and are the labels the model predicts, but shouldn't these values be *between* 0 and 1 rather than exactly 0 or 1?

Reading the documentation for the predict function:

predict(X)
Predict class labels for samples in X.
Parameters: 
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Samples.
Returns:    
C : array, shape = [n_samples]
Predicted class label per sample.

Taking the first 5 values of the returned array: 0, 1, 0, 0, 1 — how do I interpret these values as labels?
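To make the distinction concrete, here is a minimal sketch (on synthetic data, not the Titanic set) showing that predict() returns hard class labels drawn from the model's classes_ attribute, while predict_proba() returns the between-0-and-1 values I was expecting:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Tiny synthetic binary problem: one feature, labels 0 and 1
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])

clf = LogisticRegression().fit(X, y)

labels = clf.predict(X)        # hard class labels: each entry is 0 or 1
probs = clf.predict_proba(X)   # one probability per class; each row sums to 1

print(clf.classes_)  # the label set that predict() draws from
print(labels)
print(probs)
```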

Full code:

import numpy as np
import pandas as pd
from sklearn import linear_model
from sklearn import cross_validation


import matplotlib.pyplot as plt
%matplotlib inline

train=pd.read_csv('/train.csv')
test=pd.read_csv('/test.csv')

def data_cleaning(train):
    train["Age"] = train["Age"].fillna(train["Age"].median())
    train["Fare"] = train["Fare"].fillna(train["Fare"].median())
    train["Embarked"] = train["Embarked"].fillna("S")


    train.loc[train["Sex"] == "male", "Sex"] = 0
    train.loc[train["Sex"] == "female", "Sex"] = 1

    train.loc[train["Embarked"] == "S", "Embarked"] = 0
    train.loc[train["Embarked"] == "C", "Embarked"] = 1
    train.loc[train["Embarked"] == "Q", "Embarked"] = 2

    return train

train=data_cleaning(train)
test=data_cleaning(test)

predictor_Vars = [ "Sex", "Age", "SibSp", "Parch", "Fare"]

X, y = train[predictor_Vars], train.Survived

X.iloc[:5]

y.iloc[:5]

modelLogistic = linear_model.LogisticRegression()

modelLogisticCV= cross_validation.cross_val_score(modelLogistic,X,y,cv=15)

modelLogistic = linear_model.LogisticRegression()
modelLogistic.fit(X,y)
#predict(X) Predict class labels for samples in X.
predictions=modelLogistic.predict(test[predictor_Vars])
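For anyone reproducing this without the CSV files, here is a self-contained sketch of the same fit/predict flow, using a hypothetical miniature DataFrame as a stand-in for the Titanic data (the column names mirror predictor_Vars above; the values are made up):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical miniature stand-in for the cleaned Titanic frame
train = pd.DataFrame({
    "Sex":    [0, 1, 1, 0, 0, 1],
    "Age":    [22, 38, 26, 35, 54, 2],
    "SibSp":  [1, 1, 0, 1, 0, 3],
    "Parch":  [0, 0, 0, 0, 0, 1],
    "Fare":   [7.25, 71.28, 7.92, 53.1, 51.86, 21.07],
    "Survived": [0, 1, 1, 1, 0, 1],
})
predictor_Vars = ["Sex", "Age", "SibSp", "Parch", "Fare"]

model = LogisticRegression(max_iter=1000).fit(
    train[predictor_Vars], train["Survived"]
)
preds = model.predict(train[predictor_Vars])

# Pair each input row with its predicted label for inspection
result = train[predictor_Vars].assign(predicted=preds)
print(result)
```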

Update:

Printing the first 10 elements from the test dataset:

(screenshot of the first 10 rows of the test dataset)

These can be seen to match the first 10 predictions in the array:

0, 1, 0, 0, 1, 0, 1, 0, 1, 0

So these are the predictions of the logistic regression model, fitted on the train dataset and applied to the test dataset.

1 Answer:

Answer 0 (score: 2)

As stated in the documentation, the values returned by the predict function are class labels — the same kind of values you passed to the fit function. In your case, 1 means survived and 0 means did not survive.

If you want a score for each prediction instead of a hard label, use predict_proba, which returns probabilities between 0 and 1, or decision_function, which returns the signed distance to the decision boundary (positive for class 1, negative for class 0; note it is not bounded to any fixed interval).

I hope this answers your question.
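A small sketch (synthetic data) illustrating how the three methods relate: predict() is equivalent to thresholding decision_function() at 0, and predict_proba() is the logistic sigmoid applied to decision_function():

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[-3.0], [-2.0], [2.0], [3.0], [10.0]])
y = np.array([0, 0, 1, 1, 1])

clf = LogisticRegression().fit(X, y)

scores = clf.decision_function(X)   # signed distance to the boundary, unbounded
probs = clf.predict_proba(X)[:, 1]  # P(class == 1), always in [0, 1]
labels = clf.predict(X)             # hard labels: score > 0 -> 1, else 0

print(scores)
print(probs)
print(labels)
```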