How to minimize a lasso loss function with scipy.minimize?

Date: 2020-06-23 10:43:44

Tags: machine-learning scipy data-science loss-function lasso-regression

Main question: why don't the lasso regression coefficients shrink to zero when minimized with scipy.minimize?

I am trying to build a lasso model with scipy.minimize. However, it only works when alpha is zero (i.e., as a plain squared-error loss). When alpha is nonzero, it returns a worse result (a higher loss), and none of the coefficients are zero.

I know the lasso is not differentiable, but I tried the Powell optimizer, which is supposed to handle non-differentiable losses (I also tried BFGS, which is supposed to tolerate non-smoothness). Neither of these optimizers worked.

To test this, I created a dataset in which y is random (reproduced below), the first feature of X is exactly y * .5, and the other four features are random (also reproduced below). I expected the algorithm to shrink those random coefficients to zero and keep only the first one, but that did not happen.

For the lasso loss function, I am using the formula from this paper (figure 1, first page).
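For reference, the objective I am minimizing is the standard penalized least-squares form (written here in the usual lasso notation, which may differ slightly from the paper's):

```latex
L(w) = \lVert y - Xw \rVert_2^2 + \alpha \sum_{j} \lvert w_j \rvert
```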

My code is as follows:

from scipy.optimize import minimize
import numpy as np

class Lasso:

    def _pred(self,X,w):
        return np.dot(X,w)

    def LossLasso(self,weights,X,y,alpha):
        w = weights
        yp = self._pred(X,w)
        # squared error plus l1 penalty on the weights
        loss = np.linalg.norm(y - yp)**2 + alpha * np.sum(np.abs(w))
        return loss

    def fit(self,X,y,alpha=0.0):
        initw = np.random.rand(X.shape[1]) #initial weights
        res = minimize(self.LossLasso,
                    initw,
                    args=(X,y,alpha),
                    method='Powell')
        return res

if __name__=='__main__':
    y = np.array([1., 0., 1., 0., 0., 1., 1., 0., 0., 0., 1., 0., 0., 0., 1., 0., 1.,
                  1., 1., 0.])
    X_informative = y.reshape(20,1)*.5
    X_noninformative = np.array([[0.94741352, 0.892991  , 0.29387455, 0.30517762],
                               [0.22743465, 0.66042825, 0.2231239 , 0.16946974],
                               [0.21918747, 0.94606854, 0.1050368 , 0.13710866],
                               [0.5236064 , 0.55479259, 0.47711427, 0.59215551],
                               [0.07061579, 0.80542011, 0.87565747, 0.193524  ],
                               [0.25345866, 0.78401146, 0.40316495, 0.78759134],
                               [0.85351906, 0.39682136, 0.74959904, 0.71950502],
                               [0.383305  , 0.32597392, 0.05472551, 0.16073454],
                               [0.1151415 , 0.71683239, 0.69560523, 0.89810466],
                               [0.48769347, 0.58225877, 0.31199272, 0.37562258],
                               [0.99447288, 0.14605177, 0.61914979, 0.85600544],
                               [0.78071238, 0.63040498, 0.79964659, 0.97343972],
                               [0.39570225, 0.15668933, 0.65247826, 0.78343458],
                               [0.49527699, 0.35968554, 0.6281051 , 0.35479879],
                               [0.13036737, 0.66529989, 0.38607805, 0.0124732 ],
                               [0.04186019, 0.13181696, 0.10475994, 0.06046115],
                               [0.50747742, 0.5022839 , 0.37147486, 0.21679859],
                               [0.93715221, 0.36066077, 0.72510501, 0.48292022],
                               [0.47952644, 0.40818585, 0.89012395, 0.20286356],
                               [0.30201193, 0.07573086, 0.3152038 , 0.49004217]])
    X = np.concatenate([X_informative,X_noninformative],axis=1)

    #alpha zero
    clf = Lasso()
    print(clf.fit(X,y,alpha=0.0))

    #alpha nonzero
    clf = Lasso()
    print(clf.fit(X,y,alpha=0.5))

The output when alpha is zero is correct:

     fun: 2.1923913945084075e-24
 message: 'Optimization terminated successfully.'
    nfev: 632
     nit: 12
  status: 0
 success: True
       x: array([ 2.00000000e+00, -1.49737205e-13, -5.49916821e-13,  8.87767676e-13,
        1.75335824e-13])

The output for nonzero alpha has a much higher loss, and none of the coefficients are zero, contrary to what I expected:

     fun: 0.9714385008821652
 message: 'Optimization terminated successfully.'
    nfev: 527
     nit: 6
  status: 0
 success: True
       x: array([ 1.86644474e+00,  1.63986381e-02,  2.99944361e-03,  1.64568796e-12,
       -6.72908469e-09])

Why don't the coefficients of the random features shrink to zero, and why is the loss so high?

2 answers:

Answer 0 (score: 0)

Is this a viable option?

import numpy as np
from sklearn.linear_model import Lasso, Ridge
from sklearn.model_selection import GridSearchCV

y = np.array([1., 0., 1., 0., 0., 1., 1., 0., 0., 0., 1., 0., 0., 0., 1., 0., 1., 1., 1., 0.])
X_informative = y.reshape(20, 1) * .5

X_noninformative = np.array([[0.94741352, 0.892991  , 0.29387455, 0.30517762],
                           [0.22743465, 0.66042825, 0.2231239 , 0.16946974],
                           [0.21918747, 0.94606854, 0.1050368 , 0.13710866],
                           [0.5236064 , 0.55479259, 0.47711427, 0.59215551],
                           [0.07061579, 0.80542011, 0.87565747, 0.193524  ],
                           [0.25345866, 0.78401146, 0.40316495, 0.78759134],
                           [0.85351906, 0.39682136, 0.74959904, 0.71950502],
                           [0.383305  , 0.32597392, 0.05472551, 0.16073454],
                           [0.1151415 , 0.71683239, 0.69560523, 0.89810466],
                           [0.48769347, 0.58225877, 0.31199272, 0.37562258],
                           [0.99447288, 0.14605177, 0.61914979, 0.85600544],
                           [0.78071238, 0.63040498, 0.79964659, 0.97343972],
                           [0.39570225, 0.15668933, 0.65247826, 0.78343458],
                           [0.49527699, 0.35968554, 0.6281051 , 0.35479879],
                           [0.13036737, 0.66529989, 0.38607805, 0.0124732 ],
                           [0.04186019, 0.13181696, 0.10475994, 0.06046115],
                           [0.50747742, 0.5022839 , 0.37147486, 0.21679859],
                           [0.93715221, 0.36066077, 0.72510501, 0.48292022],
                           [0.47952644, 0.40818585, 0.89012395, 0.20286356],
                           [0.30201193, 0.07573086, 0.3152038 , 0.49004217]])
X = np.concatenate([X_informative,X_noninformative], axis=1)

_lasso = Lasso()
_lasso_parms = {'alpha': [1e-15, 1e-10, 1e-8, 1e-4, 1e-3, 1e-2, 1, 5, 10, 20]}
_lasso_regressor = GridSearchCV(_lasso, _lasso_parms, scoring='neg_mean_squared_error', cv=5)

print('_lasso_regressor.fit(X, y)')
print(_lasso_regressor.fit(X, y))

print("\n=========================================\n")
print('lasso_regressor.best_params_: ')
print(_lasso_regressor.best_params_)
print("\n")
print('lasso_regressor.best_score_: ')
print(_lasso_regressor.best_score_)
print("\n=========================================\n")

_ridge = Ridge()
_ridge_parms = {'alpha': [1e-15, 1e-10, 1e-8, 1e-4, 1e-3, 1e-2, 1, 5, 10, 20]}
_ridge_regressor = GridSearchCV(_ridge, _ridge_parms, scoring='neg_mean_squared_error', cv=5)

print('_ridge_regressor.fit(X, y)')
print(_ridge_regressor.fit(X, y))

print("\n=========================================\n")
print('_ridge_regressor.best_params_: ')
print(_ridge_regressor.best_params_)
print("\n")
print('_ridge_regressor.best_score_: ')
print(_ridge_regressor.best_score_)
print("\n=========================================\n")

And the output: (screenshot of the GridSearchCV results)

Answer 1 (score: 0)

Have you tried running your lasso-loss minimization on other datasets? With the data you provided, the regularization (l1 penalty) accounts for almost all of the loss function's value. As you increase alpha, you raise the loss function's magnitude by orders of magnitude above the value it returns at the true optimal coefficient of 2.0.
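As a quick illustration (my own sketch, not part of the original answer): rescaling the data-fit term by 1/(2n), as scikit-learn's Lasso objective does, keeps the l1 penalty from dominating, and the same Powell minimization then drives the noise coefficients toward zero. The `alpha=0.1` value and the random noise columns here are illustrative choices, not from the question.

```python
import numpy as np
from scipy.optimize import minimize

def lasso_loss_scaled(w, X, y, alpha):
    # scikit-learn-style objective: (1/(2n)) * ||y - Xw||^2 + alpha * ||w||_1
    n = X.shape[0]
    r = y - X @ w
    return (r @ r) / (2 * n) + alpha * np.sum(np.abs(w))

rng = np.random.default_rng(0)
y = np.array([1., 0., 1., 0., 0., 1., 1., 0., 0., 0.,
              1., 0., 0., 0., 1., 0., 1., 1., 1., 0.])
# one informative column (0.5 * y) plus four random noise columns
X = np.concatenate([y.reshape(-1, 1) * 0.5, rng.random((20, 4))], axis=1)

res = minimize(lasso_loss_scaled, rng.random(X.shape[1]),
               args=(X, y, 0.1), method='Powell')
print(res.x)  # the four noise coefficients should now be close to zero
```

With the unscaled loss in the question, the squared-error term at the optimum is essentially zero, so any nonzero alpha makes the penalty the entire loss value, which is why the reported loss jumps to ~0.97 (roughly 0.5 * |1.87| from the penalty alone).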

(plot: loss vs alpha)