Mini-Batch梯度下降实现返回无意义的结果

时间:2018-10-01 09:31:55

标签: python numpy machine-learning data-science gradient-descent

我正在尝试实现一个小批量梯度下降算法来评估我的回归模型的性能特征,并且在产生具有良好确定系数的输出方面没有取得太大的成功。我的代码基于预先存在的批处理梯度下降实现,该实现工作正常。

def mini_batch_gradient_descent(y, x, params, iters, alpha, mini_batch_size):
    """
    :param y:               x, NumPy array
    :param x:               y, NumPy array
    :param params:          model parameters (w and b)
    :param iters:           max iterations
    :param alpha:           step size
    :param mini_batch_size: mini batch size
    :return: params         all tracked updated model parameters
             losses         all tracked losses during the learning course
    """

    losses = []
    thetas = []
    num_of_samples = len(x)
    for i in range(iters):
        np.random.shuffle(x)
        for j in range(0, num_of_samples, mini_batch_size):
            batch_x = x[j:j+mini_batch_size]
            batch_y = y[j:j+mini_batch_size]

            gradient = -2 * batch_x.T.dot(batch_y - batch_x.dot(param)) / len(batch_x)
            new_param = new_param - alpha * gradient   
            params.append(new_param)

            losses.append(np.mean(np.power(batch_x.dot(param) - batch_y, 2)))

    return params, losses

该下降产生的r²值为-0.002,依此类推;我不知道为什么。我在这里做任何明显错误的事情吗?

0 个答案:

没有答案