Implementing a custom objective function in Keras

Date: 2018-08-21 17:31:20

Tags: python machine-learning keras loss-function

I am trying to implement my own cost function, specifically the one below:

(image of the cost function formula, not reproduced here)

Now, I know this question has been asked several times on this site, and the answers I have read usually look like this:

def custom_objective(y_true, y_pred):
....
return L

People always seem to just use y_true and y_pred, then say you only need to compile the model with model.compile(loss=custom_objective) and go from there. Nobody ever actually sets y_true=something or y_pred=something anywhere in their code. Is that something I have to specify in the model?

My code

(I'm not sure whether .predict() is being used correctly on the model being trained.)

import keras
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout, BatchNormalization
from keras.regularizers import l2

params = {'lr': 0.0001,
 'batch_size': 30,
 'epochs': 400,
 'dropout': 0.2,
 'optimizer': 'adam',
 'losses': 'avg_partial_likelihood',
 'activation':'relu',
 'last_activation': 'linear'}

def model(x_train, y_train, x_val, y_val):

    l2_reg = 0.4
    momentum = 0.99  # note: momentum was undefined in the original snippet
    kernel_init = 'he_uniform'
    bias_init = 'he_uniform'
    layers = [20, 20, 1]

    model = Sequential()

    # layer 1
    model.add(Dense(layers[0], input_dim=x_train.shape[1],
                    kernel_regularizer=l2(l2_reg),  # Keras 2 name for W_regularizer
                    kernel_initializer=kernel_init,
                    bias_initializer=bias_init))

    model.add(BatchNormalization(axis=-1, momentum=momentum, center=True))

    model.add(Activation(params['activation']))

    model.add(Dropout(params['dropout']))

    # layer 2+    
    for layer in range(0, len(layers)-1):

        model.add(Dense(layers[layer+1],
                        kernel_regularizer=l2(l2_reg),  # Keras 2 name for W_regularizer
                        kernel_initializer=kernel_init,
                        bias_initializer=bias_init))


        model.add(BatchNormalization(axis=-1, momentum=momentum, center=True))

        model.add(Activation(params['activation']))

        model.add(Dropout(params['dropout']))

    # Last layer
    model.add(Dense(layers[-1], activation=params['last_activation'],
                    kernel_initializer=kernel_init,
                    bias_initializer=bias_init))

    model.compile(loss=params['losses'],
                  optimizer=keras.optimizers.adam(lr=params['lr']),
                  metrics=['accuracy'])

    history = model.fit(x_train, y_train, 
                        validation_data=[x_val, y_val],
                        batch_size=params['batch_size'],
                        epochs=params['epochs'],
                        verbose=1)

    y_pred = model.predict(x_train, batch_size=params['batch_size'])

    history_dict = history.history

    model_output = {'model':model, 
                    'history_dict':history_dict,
                    'log_risk':y_pred}

    return model_output

Then the model is created with:

model(x_train, y_train, x_val, y_val)

My objective function so far

'log_risk' will be y_true, and x_train will be used to compute y_pred:

def avg_partial_likelihood(x_train, log_risk):

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    cph = CoxPHFitter()

    cph.fit(x_train, duration_col='survival_fu_combine', event_col='death',
            show_progress=False)

    # obtain exp(hx)

    cph_output = pd.DataFrame(cph.summary).T

    # summing hazard ratio

    hazard_ratio_sum = cph_output.iloc[1,].sum()

    # -log(sum(exp(hxj)))

    neg_log_sum = -np.log(hazard_ratio_sum)

    # sum of positive events (death==1)

    sum_noncensored_events = (x_train.death==1).sum()

    # neg_likelihood

    neg_likelihood = -(log_risk + neg_log_sum)/sum_noncensored_events

    return neg_likelihood

The error I get if I try to run it:

  AttributeError                            Traceback (most recent call last)
<ipython-input-26-cf0236299ad5> in <module>()
----> 1 model(x_train, y_train, x_val, y_val)

<ipython-input-25-d0f9409c831a> in model(x_train, y_train, x_val, y_val)
     45     model.compile(loss=avg_partial_likelihood,
     46                   optimizer=keras.optimizers.adam(lr=params['lr']),
---> 47                   metrics=['accuracy'])
     48 
     49     history = model.fit(x_train, y_train, 

~\Anaconda3\lib\site-packages\keras\engine\training.py in compile(self, optimizer, loss, metrics, loss_weights, sample_weight_mode, weighted_metrics, target_tensors, **kwargs)
    331                 with K.name_scope(self.output_names[i] + '_loss'):
    332                     output_loss = weighted_loss(y_true, y_pred,
--> 333                                                 sample_weight, mask)
    334                 if len(self.outputs) > 1:
    335                     self.metrics_tensors.append(output_loss)

~\Anaconda3\lib\site-packages\keras\engine\training_utils.py in weighted(y_true, y_pred, weights, mask)
    401         """
    402         # score_array has ndim >= 2
--> 403         score_array = fn(y_true, y_pred)
    404         if mask is not None:
    405             # Cast the mask to floatX to avoid float64 upcasting in Theano

<ipython-input-23-ed57799a1f9d> in avg_partial_likelihood(x_train, log_risk)
     27 
     28     cph.fit(x_train, duration_col='survival_fu_combine', event_col='death',
---> 29            show_progress=False)
     30 
     31     # obtain exp(hx)

~\Anaconda3\lib\site-packages\lifelines\fitters\coxph_fitter.py in fit(self, df, duration_col, event_col, show_progress, initial_beta, strata, step_size, weights_col)
     90         """
     91 
---> 92         df = df.copy()
     93 
     94         # Sort on time

AttributeError: 'Tensor' object has no attribute 'copy'

1 answer:

Answer 0 (score: 1)

  "Nobody ever actually sets y_true=something or y_pred=something anywhere in their code ..."
They don't mention it because you don't need to do it! In fact, at the end of each pass (i.e. one forward propagation), Keras itself feeds y_true and y_pred with the true labels and the model's predictions for that pass. Therefore, you don't need to define y_true or y_pred in your model at all. Just define your loss function using backend functions (i.e. from keras import backend as K) and everything will work fine (and never use numpy inside a loss function). To get an idea, take a look at the built-in loss functions in Keras and see how they are implemented. here is a (possibly incomplete) list of the available backend functions.
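For illustration, here is a minimal sketch of such a loss. It computes a scaled mean-squared error, not the partial likelihood from the question; the function name and scaling factor are illustrative. It is written against tensorflow.keras for convenience — on an older standalone Keras install, from keras import backend as K works the same way:

```python
import tensorflow as tf
from tensorflow.keras import backend as K

def scaled_mse(y_true, y_pred):
    # Every operation here is a backend tensor op, so Keras can
    # differentiate through the loss during training. Keras itself
    # supplies y_true (labels) and y_pred (model outputs) each pass.
    return 0.5 * K.mean(K.square(y_pred - y_true), axis=-1)

# Quick check on constant tensors: per-element errors are [0.5, 0.5],
# so the per-sample loss is 0.5 * mean([0.25, 0.25]) = 0.125.
y_true = K.constant([[1.0, 2.0]])
y_pred = K.constant([[1.5, 2.5]])
print(float(scaled_mse(y_true, y_pred)[0]))  # 0.125
```

You would then pass the function object (not a string) to compile, e.g. model.compile(loss=scaled_mse, optimizer='adam').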