Simple regression works with randn, but not with random (uniform)

Date: 2016-08-12 10:45:32

Tags: python machine-learning regression logistic-regression logarithm

Last night I wrote some simple binary logistic regression Python code. It seems to work fine (the likelihood increases with every iteration, and I get good classification results).

My problem is that I can only initialize my weights with W = np.random.randn(n+1, 1), i.e. with a normal distribution.

But I don't want a normal distribution, I want a uniform one. When I initialize the weights that way, I get the warning

"RuntimeWarning: divide by zero encountered in log
  return np.dot(Y.T, np.log(predictions)) + np.dot((onesVector - Y).T, np.log(onesVector - predictions))"
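A minimal sketch of where this warning can come from (my reading, not stated in the question): MNIST pixels are non-negative, so an all-positive uniform init such as np.random.rand pushes the logits np.dot(X, W) to large positive values; in float64 the sigmoid then rounds to exactly 1.0, and log(1 - prediction) becomes log(0):

import numpy as np

def sigmoid(x):
    return 1/(1 + np.exp(-x))

z = 40.0                  # a large logit, easy to reach by summing hundreds of positive terms
p = sigmoid(z)
print(p == 1.0)           # True: exp(-40) is below float64 resolution around 1.0
print(np.log(1.0 - p))    # -inf, with "RuntimeWarning: divide by zero encountered in log"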

Here is my code:

import numpy as np
import matplotlib.pyplot as plt

def sigmoid(x):
    return 1/(1 + np.exp(-x))

def predict(X, W):
    # Predicted probability of the positive class.
    return sigmoid(np.dot(X, W))

def logLikelihood(X, Y, W):
    # Bernoulli log-likelihood: sum of y*log(p) + (1-y)*log(1-p).
    m = X.shape[0]
    predictions = predict(X, W)
    onesVector = np.ones((m, 1))
    return np.dot(Y.T, np.log(predictions)) + np.dot((onesVector - Y).T, np.log(onesVector - predictions))

def gradient(X, Y, W):
    # Gradient of the log-likelihood with respect to W.
    return np.dot(X.T, Y - predict(X, W))

def successRate(X, Y, W):
    # Percentage of examples classified correctly at a 0.5 threshold.
    predictions = predict(X, W) > 0.5
    correct = (Y == predictions)
    return 100 * np.sum(correct) / float(correct.shape[0])

trX = np.load("binaryMnistTrainX.npy")
trY = np.load("binaryMnistTrainY.npy")
teX = np.load("binaryMnistTestX.npy")
teY = np.load("binaryMnistTestY.npy")

# Append a bias column of ones to the inputs.
m, n = trX.shape
trX = np.concatenate((trX, np.ones((m, 1))), axis=1)
teX = np.concatenate((teX, np.ones((teX.shape[0], 1))), axis=1)
W = np.random.randn(n + 1, 1)

learningRate = 0.00001
numIter = 500

likelihoodArray = np.zeros((numIter, 1))

# Batch gradient ascent on the log-likelihood.
for i in range(0, numIter):
    W = W + learningRate * gradient(trX, trY, W)
    likelihoodArray[i, 0] = logLikelihood(trX, trY, W)

print("train success rate is %lf" % (successRate(trX, trY, W)))
print("test success rate is %lf" % (successRate(teX, teY, W)))

plt.plot(likelihoodArray)
plt.show()

If I initialize my W to zeros or with randn, it works. If I initialize it with random (uniform, not normal) or with ones, then I get the divide-by-zero warning.

Why does this happen, and how can I fix it?
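(A common workaround, added here as a hedged note rather than an accepted answer: clamp the predictions away from 0 and 1 before taking the log, and/or center and shrink the uniform init so it behaves like randn. The names safeLogLikelihood and eps below are mine, not from the question.)

import numpy as np

def safeLogLikelihood(X, Y, W, eps=1e-12):
    # Clamp predictions into (eps, 1 - eps) so np.log never sees exactly 0.
    predictions = np.clip(1/(1 + np.exp(-np.dot(X, W))), eps, 1 - eps)
    return np.dot(Y.T, np.log(predictions)) + np.dot((1 - Y).T, np.log(1 - predictions))

# A zero-centered, small-scale uniform init is another option:
# W = 0.01 * (2*np.random.rand(n + 1, 1) - 1)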

0 Answers:

No answers yet.