Lightgbm scores differ between a custom RMSE loss function and the built-in RMSE

Posted: 2020-05-25 13:52:00

Tags: python lightgbm

To get started with custom objective functions for lightgbm, I began by replicating the standard RMSE objective. Unfortunately, the scores are different. My example is based on this post and on github.
The grad and hess are the same as in the lightgbm source, or as given in the answer to the following question.
What is wrong with the custom RMSE function?
Remark: in this example the final losses appear close, but the trajectories are completely different. In other (larger) examples I see a much bigger difference in the final loss.

import numpy as np
import matplotlib.pyplot as plt
import lightgbm
from sklearn.datasets import make_friedman1
from sklearn.model_selection import train_test_split

X, y = make_friedman1(n_samples=10000, n_features=7, noise=10.0, random_state=11)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.20, random_state=42)

# Baseline: built-in 'rmse' objective
gbm2 = lightgbm.LGBMRegressor(objective='rmse', random_state=33, early_stopping_rounds=5, n_estimators=10000)
gbm2.fit(X_train, y_train, eval_set=[(X_valid, y_valid)], eval_metric='rmse', verbose=10)
gbm2eval = gbm2.evals_result_

def custom_RMSE(y_true, y_pred):
    # Per-sample gradient and hessian of the squared-error loss
    residual = (y_pred - y_true)
    grad = residual
    hess = np.ones(len(y_true))
    return grad, hess

# Same model, but with the custom objective
gbm3 = lightgbm.LGBMRegressor(random_state=33, early_stopping_rounds=5, n_estimators=10000)
gbm3.set_params(**{'objective': custom_RMSE})
gbm3.fit(X_train, y_train, eval_set=[(X_valid, y_valid)], eval_metric='rmse', verbose=10)
gbm3eval = gbm3.evals_result_

plt.plot(gbm2eval['valid_0']['rmse'],label='rmse')
plt.plot(gbm3eval['valid_0']['rmse'],label='custom rmse')
plt.legend()

eval_results for gbm2:

Training until validation scores don't improve for 5 rounds
[10]    valid_0's rmse: 10.214
[20]    valid_0's rmse: 10.044
[30]    valid_0's rmse: 10.0348
Early stopping, best iteration is:
[28]    valid_0's rmse: 10.028

eval_results for gbm3:

Training until validation scores don't improve for 5 rounds
[10]    valid_0's rmse: 11.5991 valid_0's l2: 134.539
[20]    valid_0's rmse: 10.2721 valid_0's l2: 105.516
[30]    valid_0's rmse: 10.0801 valid_0's l2: 101.608
[40]    valid_0's rmse: 10.0424 valid_0's l2: 100.849
Early stopping, best iteration is:
[44]    valid_0's rmse: 10.0351 valid_0's l2: 100.703

As shown in the plot: losses for standard RMSE and custom RMSE
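
For reference, here is a quick sketch of how the final validation RMSE of the two fitted models can be compared directly (assuming the gbm2 and gbm3 objects trained above):

from sklearn.metrics import mean_squared_error

# Compare the final validation RMSE of both fitted models directly
pred_builtin = gbm2.predict(X_valid)
pred_custom = gbm3.predict(X_valid)
print('built-in rmse objective:', mean_squared_error(y_valid, pred_builtin) ** 0.5)
print('custom rmse objective:  ', mean_squared_error(y_valid, pred_custom) ** 0.5)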

1 Answer:

Answer 0 (score: 0)

RMSE is the square root of the MSE (mean squared error):

RMSE = sqrt( (1/n) * Σ (y_i − ŷ_i)² )

So, if you want to minimize RMSE, you should change your custom_RMSE() function into a measure of the squared residuals. Try:

def custom_RMSE(y_true, y_pred):
    squared_residual = (y_pred - y_true)**2
    grad = squared_residual
    hess = np.ones(len(y_true))
    return grad, hess

In any case, the custom_RMSE() function doesn't look like this. A custom objective should return:

grad -> array of shape [n_samples] or [n_samples * n_classes] (for multi-class task): the value of the first-order derivative (gradient) for each sample point.

hess -> array of shape [n_samples] or [n_samples * n_classes] (for multi-class task): the value of the second-order derivative (Hessian) for each sample point.

Source: https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.LGBMRegressor.html
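
As a minimal sketch of that contract (the function name and dummy data here are purely illustrative), a single-output custom objective returns two NumPy arrays, each of shape [n_samples]:

import numpy as np

def example_objective(y_true, y_pred):
    # One gradient value and one hessian value per sample point
    residual = y_pred - y_true
    grad = residual
    hess = np.ones_like(residual)
    return grad, hess

# Shape check on dummy data: both returned arrays have shape [n_samples]
y_true = np.zeros(5)
y_pred = np.arange(5, dtype=float)
grad, hess = example_objective(y_true, y_pred)
assert grad.shape == (5,) and hess.shape == (5,)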