How to calculate logistic regression accuracy

Asked: 2017-11-22 15:04:01

Tags: python machine-learning logistic-regression

I am a complete beginner at machine learning and coding in Python, and I have been tasked with coding logistic regression from scratch to understand what happens under the hood. So far I have coded the hypothesis function, the cost function, and gradient descent, and then put them together for logistic regression. However, when I print the accuracy I get a low value (0.69) that does not change as I increase the number of iterations or vary the learning rate. My question is: is there a problem with my accuracy code? Any help pointing me in the right direction would be appreciated.

import math
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

# Imports, data loading and the scaler object were not shown in the original
# post; the file name below is a placeholder for the Kaggle CSV linked further down.
data = pd.read_csv("data.csv")
min_max_scaler = MinMaxScaler()

X = data[['radius_mean', 'texture_mean', 'perimeter_mean',
   'area_mean', 'smoothness_mean', 'compactness_mean', 'concavity_mean',
   'concave points_mean', 'symmetry_mean', 'fractal_dimension_mean',
   'radius_se', 'texture_se', 'perimeter_se', 'area_se', 'smoothness_se',
   'compactness_se', 'concavity_se', 'concave points_se', 'symmetry_se',
   'fractal_dimension_se', 'radius_worst', 'texture_worst',
   'perimeter_worst', 'area_worst', 'smoothness_worst',
   'compactness_worst', 'concavity_worst', 'concave points_worst',
   'symmetry_worst', 'fractal_dimension_worst']]
X = np.array(X)
X = min_max_scaler.fit_transform(X)
Y = data["diagnosis"].map({'M':1,'B':0})
Y = np.array(Y)

X_train,X_test,Y_train,Y_test = train_test_split(X,Y,test_size=0.25)

X = data["diagnosis"].map(lambda x: float(x))

def Sigmoid(z):
    if z < 0:
        return 1 - 1/(1 + math.exp(z))
    else:
        return 1/(1 + math.exp(-z))

def Hypothesis(theta, x):
    z = 0
    for i in range(len(theta)):
        z += x[i]*theta[i]
    return Sigmoid(z)

def Cost_Function(X,Y,theta,m):
    sumOfErrors = 0
    for i in range(m):
        xi = X[i]
        hi = Hypothesis(theta,xi)
        error = Y[i] * math.log(hi if  hi >0 else 1)
        if Y[i] == 1:
            error = Y[i] * math.log(hi if  hi >0 else 1)
        elif Y[i] == 0:
            error = (1-Y[i]) * math.log(1-hi  if  1-hi >0 else 1)
        sumOfErrors += error

    constant = -1/m
    J = constant * sumOfErrors
    #print ('cost is: ', J ) 
    return J

def Cost_Function_Derivative(X,Y,theta,j,m,alpha):
    sumErrors = 0
    for i in range(m):
        xi = X[i]
        xij = xi[j]
        hi = Hypothesis(theta,X[i])
        error = (hi - Y[i])*xij
        sumErrors += error
    m = len(Y)
    constant = float(alpha)/float(m)
    J = constant * sumErrors
    return J

def Gradient_Descent(X,Y,theta,m,alpha):
    new_theta = []
    constant = alpha/m
    for j in range(len(theta)):
        CFDerivative = Cost_Function_Derivative(X,Y,theta,j,m,alpha)
        new_theta_value = theta[j] - CFDerivative
        new_theta.append(new_theta_value)
    return new_theta


def Accuracy(theta):
    correct = 0
    length = len(X_test, Hypothesis(X,theta))
    for i in range(length):
        prediction = round(Hypothesis(X[i],theta))
        answer = Y[i]
    if prediction == answer.all():
            correct += 1
    my_accuracy = (correct / length)*100
    print ('LR Accuracy %: ', my_accuracy)



def Logistic_Regression(X,Y,alpha,theta,num_iters):
    theta = np.zeros(X.shape[1])
    m = len(Y)
    for x in range(num_iters):
        new_theta = Gradient_Descent(X,Y,theta,m,alpha)
        theta = new_theta
        if x % 100 == 0:
            Cost_Function(X,Y,theta,m)
            print ('theta: ', theta)    
            print ('cost: ', Cost_Function(X,Y,theta,m))
    Accuracy(theta)

initial_theta = [0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]  
alpha = 0.0001
iterations = 1000
Logistic_Regression(X,Y,alpha,initial_theta,iterations)

This uses data from the Wisconsin breast cancer dataset (https://www.kaggle.com/uciml/breast-cancer-wisconsin-data), where I am weighting 30 features, although changing the features to ones known to be relevant does not change my accuracy either.

4 Answers:

Answer 0 (score: 1)

I am not sure how you arrived at a value of 0.0001 for alpha, but I think it is too low. Running your code on the cancer data shows that the cost does decrease on every iteration; it is just doing so at a glacial pace.

When I raise it to 0.5 the cost still decreases, but at a much more reasonable rate. After 1000 iterations it reports:

cost:  0.23668000993020666

After fixing the Accuracy function, I get 92% on the test segment of the data.
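A minimal sketch of that fix, keeping the original loop style, might look like this (the key changes: len() takes a single argument, the model is evaluated on the held-out test set, Hypothesis gets its arguments in the declared order, and the comparison happens inside the loop):

def Accuracy(theta):
    correct = 0
    length = len(X_test)                                  # len() takes one argument
    for i in range(length):
        prediction = round(Hypothesis(theta, X_test[i]))  # theta first, as declared
        if prediction == Y_test[i]:                       # compare inside the loop
            correct += 1
    my_accuracy = (correct / length) * 100
    print('LR Accuracy %: ', my_accuracy)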

You have Numpy installed, as shown by X = np.array(X). You should consider using it for your computations; for work like this it will be orders of magnitude faster. Here is a vectorized version that gives a result instantly rather than making you wait:

import math
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

df = pd.read_csv("cancerdata.csv")
X = df.values[:,2:-1].astype('float64')
X = (X - np.mean(X, axis =0)) /  np.std(X, axis = 0)

## Add a bias column to the data
X = np.hstack([np.ones((X.shape[0], 1)),X])
X = MinMaxScaler().fit_transform(X)
Y = df["diagnosis"].map({'M':1,'B':0})
Y = np.array(Y)
X_train,X_test,Y_train,Y_test = train_test_split(X,Y,test_size=0.25)


def Sigmoid(z):
    return 1/(1 + np.exp(-z))

def Hypothesis(theta, x):
    # x here is the full (m, n) design matrix, theta an (n, 1) column vector
    return Sigmoid(x @ theta) 

def Cost_Function(X,Y,theta,m):
    hi = Hypothesis(theta, X)
    _y = Y.reshape(-1, 1)
    J = 1/float(m) * np.sum(-_y * np.log(hi) - (1-_y) * np.log(1-hi))
    return J

def Cost_Function_Derivative(X,Y,theta,m,alpha):
    # returns alpha * gradient, i.e. the whole update step for every theta at once
    hi = Hypothesis(theta,X)
    _y = Y.reshape(-1, 1)
    J = alpha/float(m) * X.T @ (hi - _y)
    return J

def Gradient_Descent(X,Y,theta,m,alpha):
    new_theta = theta - Cost_Function_Derivative(X,Y,theta,m,alpha)
    return new_theta

def Accuracy(theta):
    correct = 0
    length = len(X_test)
    prediction = (Hypothesis(theta, X_test) > 0.5)
    _y = Y_test.reshape(-1, 1)
    correct = prediction == _y
    my_accuracy = (np.sum(correct) / length)*100
    print ('LR Accuracy %: ', my_accuracy)

def Logistic_Regression(X,Y,alpha,theta,num_iters):
    m = len(Y)
    for x in range(num_iters):
        new_theta = Gradient_Descent(X,Y,theta,m,alpha)
        theta = new_theta
        if x % 100 == 0:
            #print ('theta: ', theta)    
            print ('cost: ', Cost_Function(X,Y,theta,m))
    Accuracy(theta)

ep = .012

initial_theta = np.random.rand(X_train.shape[1],1) * 2 * ep - ep
alpha = 0.5
iterations = 2000
Logistic_Regression(X_train,Y_train,alpha,initial_theta,iterations)

I think I may have a different version of scikit, because I had to change the MinMaxScaler line to get it to work. The result is that I can do 10K iterations in the blink of an eye, and applying the model to the test set gives an accuracy of about 97%.

Answer 1 (score: 0)

Accuracy is one of the most intuitive performance measures: it is simply the ratio of correctly predicted observations to total observations. Higher accuracy means the model performs better.

Accuracy = (TP + TN) / (TP + FP + FN + TN)

TP = True positives
TN = True negatives
FP = False positives
FN = False negatives
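As a quick worked example (the labels and predictions below are made up purely for illustration, not taken from the cancer data), the formula can be checked against scikit-learn:

import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # toy ground truth
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])   # toy predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
acc = (tp + tn) / (tp + tn + fp + fn)          # the formula above -> 0.75
print(acc, accuracy_score(y_true, y_pred))     # both give 0.75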

Accuracy is a good measure only when your false positives and false negatives carry similar costs. A better metric is the F1 score, given by:

F1-score = 2 * (Precision * Recall) / (Precision + Recall)

where

Precision = TP / (TP + FP)
Recall = TP / (TP + FN)

Read more here:

https://en.wikipedia.org/wiki/Precision_and_recall
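As a small sketch (again with made-up labels), scikit-learn provides these metrics ready to use:

from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # toy ground truth
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # toy predictions

precision = precision_score(y_true, y_pred)   # TP / (TP + FP)
recall = recall_score(y_true, y_pred)         # TP / (TP + FN)
f1 = f1_score(y_true, y_pred)                 # 2 * P * R / (P + R)
print(precision, recall, f1)                  # 0.75 0.75 0.75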

The nice thing about machine learning in Python is that important modules like scikit-learn are open source, so you can always look at the actual code. Use the link below to the scikit-learn metrics source code; it will give you an idea of how scikit-learn calculates the accuracy score:

from sklearn.metrics import accuracy_score
accuracy_score(y_true, y_pred)

https://github.com/scikit-learn/scikit-learn/tree/master/sklearn/metrics

Answer 2 (score: 0)

Python gives us the scikit-learn library, which makes our work easier. This worked for me:

from sklearn.metrics import accuracy_score

y_pred = log.predict(x_test)
score = accuracy_score(y_test, y_pred)
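Here log is assumed to be a classifier that has already been fitted; a fuller, self-contained sketch (variable names are my own) would be:

from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# x is the scaled feature matrix and y the 0/1 labels (X and Y in the question)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25)

log = LogisticRegression()
log.fit(x_train, y_train)              # fit the model before predicting

y_pred = log.predict(x_test)
score = accuracy_score(y_test, y_pred)
print(score)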

Answer 3 (score: 0)

This also works, using vectorization to calculate the accuracy. But accuracy is not the recommended metric, as mentioned in the answers above (if the data is imbalanced you should use the F1-score rather than accuracy):

import sklearn.linear_model
import numpy as np

# Note: this snippet assumes X and Y store samples along the second (column) axis,
# hence the transposes when fitting and predicting.
clf = sklearn.linear_model.LogisticRegressionCV()
clf.fit(X.T, Y.T)
LR_predictions = clf.predict(X.T)
print('Accuracy of logistic regression: %d ' % float((np.dot(Y, LR_predictions) +
      np.dot(1 - Y, 1 - LR_predictions)) / float(Y.size) * 100) +
      '% ' + "(percentage of correctly labelled datapoints)")