Linear regression loss value increases after each gradient descent iteration

Asked: 2020-10-02 11:12:01

Tags: python numpy machine-learning scikit-learn linear-regression

I'm trying to implement multivariate linear regression (gradient descent with an MSE cost function), but the loss value grows exponentially with every iteration of gradient descent, and I can't figure out why.

import numpy as np
from sklearn.datasets import load_boston


class LinearRegression:

    def __init__(self):
        self.X = None  # The feature vectors [shape = (m, n)]
        self.y = None  # The regression outputs [shape = (m, 1)]
        self.W = None  # The parameter vector `W` [shape = (n, 1)]
        self.bias = None  # The bias value `b`
        self.lr = None  # Learning Rate `alpha`
        self.m = None
        self.n = None
        self.epochs = None

    def fit(self, X: np.ndarray, y: np.ndarray, epochs: int = 100, lr: float = 0.001):
        self.X = X  # shape (m, n)
        self.m, self.n = X.shape
        assert y.size == self.m and y.shape[0] == self.m
        self.y = np.reshape(y, (-1, 1))  # reshape y from (m,) or (m, 1) to (m, 1)
        assert self.y.shape == (self.m, 1)
        self.W = np.random.random((self.n, 1)) * 1e-3  # shape (n, 1)
        self.bias = 0.0
        self.epochs = epochs
        self.lr = lr
        self.minimize()

    def minimize(self, verbose: bool = True):
        for num_epoch in range(self.epochs):
            predictions = np.dot(self.X, self.W)

            assert predictions.shape == (self.m, 1)
            grad_w = (1/self.m) * np.sum((predictions-self.y) * self.X, axis=0)[:, np.newaxis]
            self.W = self.W - self.lr * grad_w
            assert self.W.shape == grad_w.shape
            loss = (1 / 2 * self.m) * np.sum(np.square(predictions - self.y))

            if verbose:
                print(f'Epoch : {num_epoch+1}/{self.epochs} \t Loss : {loss.item()}')


linear_regression = LinearRegression()
x_train, y_train = load_boston(return_X_y=True)
linear_regression.fit(x_train, y_train, 10)

I'm using the Boston housing dataset from sklearn.

PS: I'd like to know what is causing this, how to fix it, and whether my implementation is correct.

Thanks

1 Answer:

Answer 0 (score: 1)

The bug is in the gradient. A divergence like that is not something you should see with this kind of iterative solver (e.g. an iterative shrinkage-thresholding algorithm, ISTA). Look at the gradient computation: X has shape (m, n) and W has shape (n, 1), so (predictions - y) has shape (m, 1), and then you multiply it by X on the left? (m, 1) times (m, n)? It's not clear what numpy is computing there, but it is not what you want to compute:

grad_w = (1/self.m) * np.sum((predictions - self.y) * self.X, axis=0)[:, np.newaxis]

The code here should be slightly different: you want to multiply (n, m) by (m, 1) so that the result has the same shape as W, namely (n, 1):

grad_w = (1/self.m) * np.dot(self.X.T, predictions - self.y)

That keeps the shapes consistent with the derivation.
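For reference, with the MSE cost $J(W) = \frac{1}{2m}\lVert XW - y \rVert^2$, differentiating with respect to W gives

$$\nabla_W J = \frac{1}{m} X^\top (XW - y)$$

which is exactly the (n, m) by (m, 1) product computed above.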

I'm also not sure why you used dot (a good idea) for the predictions but not for the gradient.
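To make the shapes concrete, here is a minimal sketch (the sizes m = 5, n = 3 are hypothetical, chosen just for illustration) showing that the dot form produces the (n, 1) gradient directly, with no axis juggling:

import numpy as np

m, n = 5, 3  # hypothetical sizes, for illustration only
X = np.random.random((m, n))
residuals = np.random.random((m, 1))  # stands in for (predictions - y)

grad_w = np.dot(X.T, residuals)  # (n, m) @ (m, 1) -> (n, 1), same shape as W
print(grad_w.shape)  # (3, 1)

No np.sum, np.newaxis, or reshaping is required.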

You shouldn't need that much reshaping either.

Then you can check that your gradient is correct:

import numpy as np
from sklearn.datasets import load_boston

A, b = load_boston(return_X_y=True)
n_samples = A.shape[0]
n_features = A.shape[1]

def grad_linreg(x):
    """Least-squares gradient"""
    grad = (1. / n_samples) * np.dot(A.T, np.dot(A, x) - b)
    return grad

def loss_linreg(x):
    """Least-squares loss"""
    f = (1. / (2. * n_samples)) * np.sum((b - np.dot(A, x)) ** 2)
    return f
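For instance, you can compare the analytic gradient against a finite-difference approximation with scipy.optimize.check_grad (a standard SciPy utility; the zero starting point below is just an arbitrary choice):

from scipy.optimize import check_grad

x0 = np.zeros(n_features)  # arbitrary test point
err = check_grad(loss_linreg, grad_linreg, x0)
# err is the 2-norm of (analytic gradient - finite-difference gradient);
# it should be tiny compared with the gradient's own norm
print(err, np.linalg.norm(grad_linreg(x0)))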

Then you can build your model on top of that. If you want to test with ISTA/FISTA and Logistic/Linear Regression and LASSO/RIDGE, here is a jupyter notebook with the theory and a working example.
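As one possible way to build on this, here is a minimal gradient-descent sketch using loss_linreg and grad_linreg from above. Standardizing the feature columns first and the step size of 0.01 are my own illustrative choices, not part of the original answer; on the raw Boston features (some columns reach into the hundreds) a fixed step this large would diverge:

A = (A - A.mean(axis=0)) / A.std(axis=0)  # z-score each feature column
# grad_linreg/loss_linreg read the module-level A, so they pick this up

x = np.zeros(n_features)
lr = 0.01  # illustrative step size; safe here because features are standardized
for epoch in range(100):
    x = x - lr * grad_linreg(x)
    print(f'Epoch {epoch + 1}/100 \t Loss : {loss_linreg(x)}')

With standardized features the loss decreases monotonically instead of blowing up.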