Why is this logistic regression implementation in Python incorrect?

Asked: 2016-06-09 09:12:18

Tags: python machine-learning logistic-regression

I implemented logistic regression in Python. I think there is some error in the code, because I cannot get the correct accuracy on the test set. Here is the code:

from __future__ import division
import numpy as np
from math import log
import os, sys


class LogisticRegressionModel:

    def __init__(self, n):
        self.n = n
        self.theta = np.zeros((n+1, 1))
        print(self.theta)


    def SGD(self, trainingSet, epochs, minibatchsize, eta):
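        # Note: despite the name, this accumulates the gradient over the entire
        # training set before each update (full-batch gradient descent);
        # minibatchsize is never used.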
        m = len(trainingSet)
        for epoch in range(epochs):
            derSum = np.zeros(self.theta.shape)
            for xi, yi in trainingSet:
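                # prepend the bias term; xi is expected to be an (n, 1) column vector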
                xi = np.concatenate(([[1]], xi), axis=0)
                #print(xi)
                hi = self.sigmoid(np.dot(np.transpose(self.theta), xi))
                derSum = derSum + (hi-yi)*xi

            self.theta = self.theta - eta/m*derSum

            print(self.cost(trainingSet))


    def cost(self, dataset):
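        # Mean binary cross-entropy (log loss) over the dataset.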
        totCost=0
        for xi, yi in dataset:
            xi = np.concatenate(([[1]], xi), axis=0)
            hi = self.sigmoid(np.dot(np.transpose(self.theta), xi))
            totCost += -1*(yi*log(hi)+(1-yi)*log(1-hi))

        return totCost/len(dataset)



    def sigmoid(self, z):
        return 1.0/(1.0+np.exp(-1*z))


    def evaluate(self, testSet):
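        # Threshold the predicted probability at 0.5 and print accuracy in percent.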
        mtest = len(testSet)
        count=0
        for xi, yi in testSet:
            xi = np.concatenate(([[1]], xi), axis=0)
            hi = self.sigmoid(np.dot(self.theta.transpose(), xi))
            #print(str(hi[0, 0])+" "+str(yi))
            if hi>=0.5:
                hi=1
            else:
                hi=0
            if yi==hi:
                count+=1
        print(count/mtest*100)

This is a two-class classifier. The dataset has a linear decision boundary; when I test the same data with Octave, the accuracy is above 95%, but the implementation above only reaches about 60%. I have also tried changing the learning rate and other settings, but it did not help.
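For a self-contained check, the class above can be exercised on a synthetic, linearly separable dataset along the following lines; the make_set helper and the (n, 1) column-vector layout for each xi are illustrative assumptions, not part of the original post.

import numpy as np

rng = np.random.RandomState(0)

def make_set(m, n=2):
    # Linearly separable toy data: label is 1 when the features sum to a positive value.
    data = []
    for _ in range(m):
        x = rng.uniform(-1, 1, size=(n, 1))   # (n, 1) column vector, matching the concatenate call in SGD
        data.append((x, 1 if x.sum() > 0 else 0))
    return data

model = LogisticRegressionModel(2)
model.SGD(make_set(200), epochs=500, minibatchsize=None, eta=0.5)
model.evaluate(make_set(100))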

1 Answer:

Answer 0 (score: 1)

Assuming your training data is a list of ([feature1, ..., featuren], label) pairs, the following code seems to work for me. It is a modification of your code, except that I put things into array form in the appropriate places:

from __future__ import division
import numpy as np

def sigmoid(z):
    return 1/(1+np.exp(-z))

def log_loss(y,ypred):
    return -(y*np.log(ypred) + (1-y)*np.log(1-ypred)).mean()

class LogisticRegressionModel:

    def __init__(self, n):
        self.n = n
        self.theta = np.zeros((1,n+1))
        print(self.theta)


    def SGD(self, trainingSet, epochs, minibatchsize, eta):
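        # Pack the whole training set into a design matrix X (a bias row of ones
        # plus one column per example) and a label row Y, then do full-batch
        # gradient descent with vectorised NumPy operations.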
        m = len(trainingSet)
        X = np.ones((self.n+1,m))
        Y = np.zeros((1,m))

        for i, (xi, yi) in enumerate(trainingSet):
            X[1:,i] = xi
            Y[:,i] = yi

        for epoch in range(epochs):
            H = sigmoid(self.theta.dot(X))
            derSum = (H-Y).dot(X.T)

            self.theta -= eta * derSum/m

            print(log_loss(Y,H))


    def evaluate(self, testSet):
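        # Vectorised accuracy: threshold the predicted probabilities at 0.5
        # and print the percentage of matching labels.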
        mtest = len(testSet)
        X = np.ones((self.n+1,mtest))
        Y = np.zeros((1,mtest))
        for i, (xi, yi) in enumerate(testSet):
            X[1:,i] = xi
            Y[:,i] = yi

        H = sigmoid(self.theta.dot(X))
        H = (H >= 0.5)
        print((H == Y).mean() * 100)

I am not sure what is going wrong in your code, since this should be exactly equivalent to it (apart from the superfluous reloading of the data).
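As an illustration of the data layout this version assumes (each xi a flat length-n feature sequence rather than a column vector), a hypothetical toy driver might look like this; the make_flat_set helper and the synthetic data are not part of the original answer:

import numpy as np

rng = np.random.RandomState(0)

def make_flat_set(m, n=2):
    # Each sample is (list of n features, 0/1 label); label is 1 when the features sum to a positive value.
    data = []
    for _ in range(m):
        x = rng.uniform(-1, 1, size=n)
        data.append((x.tolist(), 1 if x.sum() > 0 else 0))
    return data

model = LogisticRegressionModel(2)
model.SGD(make_flat_set(200), epochs=500, minibatchsize=None, eta=0.5)
model.evaluate(make_flat_set(100))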