Custom multiclass log loss function for LightGBM in Python returns an error

Asked: 2019-09-25 07:08:04

Tags: python machine-learning data-science lightgbm

I am trying to implement a LightGBM classifier with a custom objective function. My target data has four classes, and my data falls into natural groups of 12 observations.

The custom objective function needs to do two things:

  1. The model's predicted outputs must be probabilistic, and the probabilities for each observation must sum to one. This is the standard softmax objective and is relatively simple to implement.
  2. Within each group, the probabilities for each class must also sum to one. This has been done in the binary classification setting, where it is known as the conditional logit model. (A small sketch of both constraints follows this list.)
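
To make the two constraints concrete, here is a toy numpy sketch (the group size, class count, and variable names are illustrative only, not taken from my actual data):

import numpy as np

# Toy example: 2 groups of 4 observations, 4 classes.
rng = np.random.default_rng(0)
raw_scores = rng.normal(size=(8, 4))

# Constraint 1 (softmax): each observation's class probabilities sum to one.
exp_scores = np.exp(raw_scores - raw_scores.max(axis=1, keepdims=True))
probs = exp_scores / exp_scores.sum(axis=1, keepdims=True)

group = probs[:4]         # first group of 4 observations
print(group.sum(axis=1))  # rows already sum to 1 thanks to the softmax
print(group.sum(axis=0))  # columns generally do not sum to 1 yet

# Constraint 2: within a group, each class column should also hit its target
# (the real code below uses target groupSize-3 for class 0's column), which
# is why the function alternates column and row rescaling.
group = group / group.sum(axis=0, keepdims=True)  # one column-normalisation step
group = group / group.sum(axis=1, keepdims=True)  # one row-normalisation step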

To summarise: within each group (4 observations in my case), the probabilities should sum to one down each column and across each row. I have written a somewhat hacky function to achieve this, but when I try to run the custom objective function within Python's LightGBM framework, I get the following error:

TypeError: cannot unpack non-iterable numpy.float64 object

My full code is below:

import lightgbm as lgb
import numpy as np
import pandas as pd

def standardiseProbs(preds, groupSize, eta = 0.1, maxIter = 100):

    # add groupId to preds dataframe
    n = preds.shape[0]
    if n % groupSize != 0:
        print('The selected group size parameter is not compatible with the data')
    preds['groupId'] = np.repeat(np.arange(0, int(n/groupSize)), groupSize)

    #initialise variables
    error = 10000
    i = 0

    # perform loop while error exceeds set threshold (subject to maxIter)
    while error > eta and i<maxIter:
        i += 1
        # get sum of probabilities by game
        byGroup = preds.groupby('groupId')[[0, 1, 2, 3]].sum().reset_index()
        byGroup.columns = ['groupId', '0G', '1G', '2G', '3G']

        if '3G' in list(preds.columns):
            preds = preds.drop(['3G', '2G', '1G', '0G'], axis=1)
        preds = preds.merge(byGroup, how='inner', on='groupId')

        # adjust probs to be consistent across a game
        for v in [1, 2, 3]:
            preds[v] = preds[v] / preds[str(v) + 'G']

        preds[0] = (groupSize-3)* (preds[0] / preds['0G'])

        # sum probabilities by player
        preds['rowSum'] = preds[3] + preds[2] + preds[1] + preds[0]

        # adjust probs to be consistent across a player
        for v in [0, 1, 2, 3]:
            preds[v] = preds[v] / preds['rowSum']

        # get sum of probabilities by game
        byGroup = preds.groupby('groupId')[[0, 1, 2, 3]].sum().reset_index()
        byGroup.columns = ['groupId', '0G', '1G', '2G', '3G']

        # calc error
        errMat = abs(np.subtract(byGroup[['0G', '1G', '2G', '3G']].values, np.array([(groupSize-3), 1, 1, 1])))
        error = sum(sum(errMat))

    preds = preds[['groupId', 0, 1, 2, 3]]
    return preds

def condObjective(preds, train):
    labels = train.get_label().astype(int)  # labels come back as floats; cast for indexing
    preds = pd.DataFrame(np.reshape(preds, (int(preds.shape[0]/4), 4), order='C'), columns=[0,1,2,3])
    n = preds.shape[0]
    yy = np.zeros((n, 4))
    yy[np.arange(n), labels] = 1
    preds['matchId'] = np.repeat(np.arange(0, int(n/4)), 4)
    preds = preds[['matchId', 0, 1, 2, 3]]
    preds = standardiseProbs(preds, groupSize = 4, eta=0.001, maxIter=500)
    preds = preds[[0, 1, 2, 3]].values
    grad = (preds - yy).flatten()
    hess = (preds * (1. - preds)).flatten()
    return grad, hess

def mlogloss(preds, train):
    labels = train.get_label().astype(int)  # labels come back as floats; cast for indexing
    preds = pd.DataFrame(np.reshape(preds, (int(preds.shape[0]/4), 4), order='C'), columns=[0,1,2,3])
    n = preds.shape[0]
    yy = np.zeros((n, 4))
    yy[np.arange(n), labels] = 1
    preds['matchId'] = np.repeat(np.arange(0, int(n/4)), 4)
    preds = preds[['matchId', 0, 1, 2, 3]]
    preds = standardiseProbs(preds, groupSize = 4, eta=0.001, maxIter=500)
    preds = preds[[0, 1, 2, 3]].values
    loss = -(np.sum(yy*np.log(preds)+(1-yy)*np.log(1-preds))/n)
    return loss

n, k = 880, 5

xtrain = np.random.rand(n, k)
ytrain = np.random.randint(low=0, high=4, size=n)  # four classes, matching num_class=4
ltrain = lgb.Dataset(xtrain, label=ytrain)
xtest = np.random.rand(int(n/2), k)
ytest = np.random.randint(low=0, high=4, size=int(n/2))
ltest = lgb.Dataset(xtest, label=ytest)

lgbmParams = {'boosting_type': 'gbdt', 
              'num_leaves': 250, 
              'max_depth': 3,
              'min_data_in_leaf': 10, 
              'min_gain_to_split': 0.75, 
              'learning_rate': 0.01, 
              'subsample_for_bin': 120100, 
              'min_child_samples': 70, 
              'reg_alpha': 1.45, 
              'reg_lambda': 2.5, 
              'feature_fraction': 0.45, 
              'bagging_fraction': 0.55, 
              'is_unbalance': True, 
              'objective': 'multiclass', 
              'num_class': 4, 
              'metric': 'multi_logloss', 
              'verbose': 1}

lgbmModel = lgb.train(lgbmParams, ltrain, valid_sets=ltest, fobj=condObjective,
                      feval=mlogloss, num_boost_round=5000,
                      early_stopping_rounds=100, verbose_eval=50)

Assuming there is no better way to force my predictions to satisfy the constraints I am imposing, what do I need to do to make the custom objective work?

2 answers:

Answer 0 (score: 4):

The error

    -> 2380                 eval_name, val, is_higher_better = feval_ret // this is the return of mlogloss
       2381                 ret.append((data_name, eval_name, val, is_higher_better))
       2382         return ret
TypeError: 'numpy.float64' object is not iterable

comes from the function mlogloss(). Because you use it as the evaluation function (feval=mlogloss), it must return three things: its name, its value, and a Boolean indicating whether a higher value is better:

def mlogloss(preds, train):
    ...
    return "my_loss_name", loss_value, False

Answer 1 (score: 0):

Two functions are needed, one for training and one for validation: the custom training loss (fobj in the lgb.train arguments) must return grad, hess, while the evaluation function (feval) must return name, value, Boolean, where the Boolean indicates whether a higher value is better. For example:
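
Here is a minimal sketch of the two signatures, using a simple squared-error objective and an RMSE metric purely for illustration (the function names are made up):

import numpy as np

def custom_objective(preds, train_data):
    # fobj: return grad, hess, one value per prediction
    labels = train_data.get_label()
    grad = preds - labels           # first derivative of squared error
    hess = np.ones_like(preds)      # second derivative
    return grad, hess

def custom_metric(preds, train_data):
    # feval: return (name, value, is_higher_better)
    labels = train_data.get_label()
    rmse = np.sqrt(np.mean((preds - labels) ** 2))
    return 'custom_rmse', rmse, False  # lower RMSE is better

Both are then passed to lgb.train via fobj= and feval=, as in the question.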

Also check this (not my blog): https://maxhalford.github.io/blog/lightgbm-focal-loss/#lightgbm-custom-loss-function-caveats