Implementing logistic regression with SciPy: why does this SciPy optimization return all zeros?

Asked: 2015-11-10 20:02:30

Tags: python optimization machine-learning scipy numerical-methods

I'm trying to implement one-vs-all logistic regression as in Andrew Ng's machine learning class, where he uses an Octave function called fmincg in his implementation. I have tried using several functions with scipy.optimize.minimize, but no matter what I pass in, I get all zeros in the classifier output.

I have gone through a lot of resources over the past few hours, but the most helpful have been this Stack Overflow post and this blog post.

Is there anything, obvious or not so obvious, that is leading my implementation astray?

import numpy as np
import scipy.optimize as op

def sigmoid(z):
    """takes matrix and returns result of passing through sigmoid function"""
    return 1.0 / (1.0 + np.exp(-z))

def lrCostFunction(theta, X, y, lam=0):
    """
    evaluates logistic regression cost function:

    theta: coefficients. (n x 1 array)
    X: data matrix (m x n array)
    y: ground truth matrix (m x 1 array)
    lam: regularization constant
    """
    m = len(y)
    theta = theta.reshape((-1,1))
    theta = np.nan_to_num(theta)

    hypothesis = sigmoid(np.dot(X, theta))
    term1 = np.dot(-y.T,np.log(hypothesis))
    term2 = np.dot(1-y.T,np.log(1-hypothesis))
    J = (1/m) * term1 - term2
    J = J + (lam/(2*m))*np.sum(theta[1:]**2)
    return J

def Gradient(theta, X, y, lam=0):
    """evaluates the gradient of the regularized logistic regression cost; returns a flat array"""
    m = len(y)
    theta = theta.reshape((-1,1))
    hypothesis = sigmoid(np.dot(X, theta))
    residuals = hypothesis - y 
    reg = theta  
    reg[0,:] = 0
    reg[1:,:] = (lam/m)*theta[1:]
    grad = (1.0/m)*np.dot(X.T,residuals)
    grad = grad + reg
    return grad.flatten()

def trainOneVersusAll(X, y, labels, lam=0):
    """
    trains one vs all logistic regression.

    inputs:
        - X and y should be ndarrays with a row for each item in the training set
        - labels is a list of labels to generate probabilities for.
        - lam is a regularization constant

    outputs:
        - "all_theta", shape = (len(labels), n + 1) 
    """
    y = y.reshape((len(y), 1))
    m, n = np.shape(X)
    X = np.hstack((np.ones((m, 1)), X))
    all_theta = np.zeros((len(labels), n + 1))
    for i,c in enumerate(labels):
        initial_theta = np.zeros(n+1)
        result, _, _ = op.fmin_tnc(func=lrCostFunction, 
                            fprime=Gradient,
                            x0=initial_theta, 
                            args=(X, y==c, lam))
        print result
        all_theta[i,:] = result
    return all_theta


def predictOneVsAll(all_theta, X):
    pass

a = np.array([[ 5.,  5.,  6.],[ 6.,  0.,  8.],[ 1.,  1.,  1.], [ 6.,  1.,  9.]])
k = np.array([1,1,0,0])
# a = np.array([[1,0,1],[0,1,0]])
# k = np.array([0,1])
solution = np.linalg.lstsq(a,k)
print 'x', solution[0]
print 'resid', solution[1]
thetas = trainOneVersusAll(a, k, np.unique(k))

1 Answer:

Answer 0 (score: 1)

The problem is in your Gradient function. In numpy, assignment does not copy objects, so your line

reg = theta

makes reg a reference to theta, so every time you compute the gradient you actually modify your current solution. It should be

reg = theta.copy()
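
Here is a minimal, self-contained snippet (the array values and the names alias/independent are just for illustration) showing the difference between plain assignment and .copy():

import numpy as np

theta = np.array([1.0, 2.0, 3.0])
alias = theta                  # plain assignment: alias and theta share the same data
alias[0] = 99.0
print(theta)                   # [ 99.   2.   3.] -- theta changed as well

theta = np.array([1.0, 2.0, 3.0])
independent = theta.copy()     # .copy() allocates a new, independent array
independent[0] = 99.0
print(theta)                   # [ 1.  2.  3.] -- theta is untouched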

I would also suggest starting from random weights:

initial_theta = np.random.randn(n+1)

Now the solution is no longer all zeros (though I have not checked every formula, so there may still be math errors). It is also worth noting that logistic regression without regularization is ill-posed for linearly separable problems (its objective is unbounded), so I recommend testing with lam > 0.
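
For example, after applying the theta.copy() fix (and the random initialization above), one way to sanity-check the training on the toy data from the question might look like this; lam=0.1 is just an arbitrary small value chosen for illustration:

# assumes Gradient now uses reg = theta.copy() and trainOneVersusAll
# starts from initial_theta = np.random.randn(n+1)
thetas = trainOneVersusAll(a, k, np.unique(k), lam=0.1)
print(thetas)                    # should no longer be all zeros
print(np.allclose(thetas, 0))    # expected: False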