Softmax regression gradient

Date: 2018-10-08 07:15:06

Tags: regression gradient logistic-regression softmax

Starting from UFLDL's softmax regression notes, the gradient of the cost function is

$$\nabla_{\theta_j} J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[x^{(i)}\left(1\{y^{(i)}=j\} - P\left(y^{(i)}=j \mid x^{(i)};\theta\right)\right)\right]$$

I tried to implement it in Python, but the loss barely changes:

import numpy as np

def update_theta(x, y, theta, learning_rate):
    # 4 classes, 3 features
    theta_gradients = np.zeros((4, 3))

    for j in range(4):
        for i in range(len(x)):
            # p: softmax P(y = j | x, theta)
            p = softmax(sm_input(x[i], theta))[y[i]]
            # indicator function 1{y = j}
            p -= 1 if y[i] == j else 0
            x[i] = p * x[i]
            # sum gradients
            theta_gradients[j] += x[i]
        theta_gradients[j] = theta_gradients[j] / len(x)

    theta = theta.T - learning_rate * theta_gradients
    return theta.T
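
For anyone trying to run the snippet: softmax and sm_input are not shown in the question. A plausible minimal version of the two helpers, assumed here rather than taken from the original post, consistent with the shapes above (theta as features x classes), would be:

import numpy as np

def sm_input(x_i, theta):
    # linear class scores x_i · theta; theta is (num_features, num_classes)
    return np.dot(x_i, theta)

def softmax(z):
    # numerically stable softmax over a vector of class scores
    z = z - np.max(z)
    e = np.exp(z)
    return e / np.sum(e)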

My losses and training accuracy counts over the first 10 iterations:

1.3863767797767788
train acc cnt 3
1.386293406734411
train acc cnt 255
1.3862943723056675
train acc cnt 3
1.3862943609888068
train acc cnt 255
1.386294361121427
train acc cnt 3
1.3862943611198806
train acc cnt 254
1.386294361119894
train acc cnt 4
1.3862943611198937
train acc cnt 125
1.3862943611198937
train acc cnt 125
1.3862943611198937
train acc cnt 125

I don't know whether I've misunderstood the equation; any advice would be greatly appreciated!

1 Answer:

Answer 0 (score: 0)

Aren't you re-initializing theta_gradients from scratch every time update_theta is called?

Generally, each gradient step should learn from the previous theta.

Just as an example:

import numpy as np

def step_gradient(theta_current, X, y, learning_rate):
    # predictions from the current theta (predict_abs is assumed to be defined elsewhere)
    preds = predict_abs(theta_current, X)
    # mean-squared-error gradient: -(2/m) * X^T (y - preds)
    theta_gradient = -(2 / len(y)) * np.dot(X.T, (y - preds))
    # one gradient step starting from the previous theta
    theta = theta_current - learning_rate * theta_gradient
    return theta
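
As a side note, the example above uses a squared-error gradient; for the softmax case in the question, a vectorized sketch of the same idea (one update computed from the current theta, without modifying the training data) could look like the following. The function name, shapes, and one-hot construction are assumptions for illustration, not code from either post:

import numpy as np

def softmax_step(theta, X, y, learning_rate):
    # X: (m, n) examples, y: (m,) integer labels, theta: (k, n) with k classes
    m = X.shape[0]
    scores = X.dot(theta.T)                      # (m, k) class scores
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)    # P(y = j | x; theta)
    one_hot = np.eye(theta.shape[0])[y]          # indicator 1{y(i) = j}
    # UFLDL gradient: -(1/m) * sum_i x(i) * (1{y(i) = j} - P(y(i) = j | x(i); theta))
    grad = -(one_hot - probs).T.dot(X) / m       # (k, n)
    return theta - learning_rate * grad

Called in a loop, e.g. theta = softmax_step(theta, X, y, 0.1), each step then starts from the theta returned by the previous one.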