I'm trying to implement logistic regression with gradient descent to find the weights of a multivariate function, given some data. So far I have come up with the following, where the gradientDescent() function works with meanSquareError() as its input function.
import math
import numpy as np
def logisticFunction(x, w):
    # Computes 1 / (1 + exp(w^T x)) for each sample (column of x)
    one = np.array([1])
    wTx = np.dot(np.transpose(x), w)
    exp = np.exp(wTx)
    add = np.add(one, exp)
    div = np.divide(one, add)
    return div
def logisticError(x, y, w):
    # Gradient of the logistic loss with respect to w: -X (y - logisticFunction(x, w))
    logistic = logisticFunction(x, w)
    sub = np.subtract(y, logistic)
    dot = np.dot(x, sub)
    return np.negative(dot)
def gradientDescent(x, y, foo, w0, step, eps_err):
    # Iterate w_next = w_prev - step * foo(x, y, w_prev) until the summed update is below eps_err
    wPrev = w0
    error = foo(x, y, wPrev)
    wNext = np.subtract(wPrev, np.dot(step, error))
    while math.fabs(np.sum(np.subtract(wNext, wPrev))) >= eps_err:
        wPrev = wNext
        error = foo(x, y, wPrev)
        wNext = np.subtract(wPrev, np.dot(step, error))
    return wNext
def meanSquareError(x, y, w):
    # Gradient of the squared error with respect to w: -2 * X (y - X^T w)
    Xw = np.dot(np.transpose(x), w)
    sub = np.subtract(y, Xw)
    dot = np.dot(x, sub)
    return np.multiply(np.array([-2]), dot)
x = np.array([[0.86, 0.09, -0.85, 0.87, -0.44, -0.43, -1.1, 0.40, -0.96, 0.17],
              [1] * 10])
# print(np.transpose(x))
y = np.array([2.49, 0.83, -0.25, 3.10, 0.87, 0.02, -0.12, 1.81, -0.83, 0.43])
eps_err = np.array([0.01] * len(x))
# print(logisticError(x, y, np.array([1, 1])))
print(gradientDescent(x, y, logisticError, np.array([1, 1]), 0.05, 0.1))
When I use the logisticError() function I get an overflow error, which seems to be because the logistic function does not converge under gradient descent. I can't seem to find anything wrong through a normal online search, so any help would be appreciated.
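For reference, the overflow itself is easy to reproduce in isolation: np.exp overflows to inf once w^T x gets large, which is what the RuntimeWarning points at. The sketch below only illustrates that symptom (logisticFunctionClipped is just an illustrative name I made up, and np.clip is only one way to keep the exponent in range; it does not explain why the weights keep growing):

import numpy as np

# np.exp overflows to inf for large arguments and emits
# "RuntimeWarning: overflow encountered in exp"
big = np.array([1000.0])
print(np.exp(big))  # -> [inf]

# Same logistic expression as above, but with the exponent clipped
# (exp(709) is roughly the float64 limit)
def logisticFunctionClipped(x, w):
    wTx = np.dot(np.transpose(x), w)
    wTx = np.clip(wTx, -500, 500)
    return np.divide(1.0, np.add(1.0, np.exp(wTx)))

If SciPy is available, scipy.special.expit is another common way to evaluate a logistic function without the overflow warning, though that alone would not make the descent converge.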