Creating a custom error function in CNTK

Date: 2017-07-14 14:31:22

Tags: python cntk

Here is part of my current Python code for NN training with the CNTK module:

import numpy
import cntk as C

batch_axis = C.Axis.default_batch_axis()
input_seq_axis = C.Axis.default_dynamic_axis()

input_dynamic_axes = [batch_axis, input_seq_axis]
input_dynamic_axes2 = [batch_axis, input_seq_axis]

input = C.input_variable(n_ins, dynamic_axes=input_dynamic_axes, dtype=numpy.float32)
output = C.input_variable(n_outs, dynamic_axes=input_dynamic_axes2, dtype=numpy.float32)

dnn_model = cntk_model.create_model(input, hidden_layer_type, hidden_layer_size, n_outs)

loss = C.squared_error(dnn_model, output)
error = C.squared_error(dnn_model, output)

lr_schedule = C.learning_rate_schedule(current_finetune_lr, C.UnitType.minibatch)
momentum_schedule = C.momentum_schedule(current_momentum)

learner = C.adam(dnn_model.parameters, lr_schedule, momentum_schedule, unit_gain=False,
                 l1_regularization_weight=l1_reg, l2_regularization_weight=l2_reg)

trainer = C.Trainer(dnn_model, (loss, error), [learner])

Here is the code that creates the NN model:

def create_model(features, hidden_layer_type, hidden_layer_size, n_out):
    logger.debug('Creating cntk model')
    # the two lists must be parallel: one type entry per layer size
    assert len(hidden_layer_size) == len(hidden_layer_type)

    n_layers = len(hidden_layer_size)

    my_layers = list()
    for i in range(n_layers):
        if hidden_layer_type[i] == 'TANH':
            my_layers.append(C.layers.Dense(hidden_layer_size[i], activation=C.tanh, init=C.layers.glorot_uniform()))
        elif hidden_layer_type[i] == 'LSTM':
            my_layers.append(C.layers.Recurrence(C.layers.LSTM(hidden_layer_size[i])))
        else:
            raise Exception('Unknown hidden layer type')

    # linear output layer
    my_layers.append(C.layers.Dense(n_out, activation=None))

    my_model = C.layers.Sequential(my_layers)
    my_model = my_model(features)

    return my_model
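
For illustration, a call with hypothetical layer lists (the two lists must be parallel, as the assert enforces) could look like this:

# hypothetical configuration: two TANH layers around an LSTM layer
hidden_layer_type = ['TANH', 'LSTM', 'TANH']
hidden_layer_size = [512, 256, 512]
dnn_model = create_model(input, hidden_layer_type, hidden_layer_size, n_outs)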

Now I want to change the backpropagation so that, when the error is computed, it uses not the direct network output but the output after some additional computation. I tried to define something like this:

def create_error_function(self, prediction, target):
    # de-normalize the prediction
    prediction_denorm = C.element_times(prediction, self.std_vector)
    prediction_denorm = C.plus(prediction_denorm, self.mean_vector)
    # quantize the first five outputs to multiples of 1/round(prediction_denorm[5])
    prediction_denorm_rounded = C.round(C.element_times(prediction_denorm[0:5], C.round(prediction_denorm[5])))
    prediction_denorm_rounded = C.element_divide(prediction_denorm_rounded, C.round(prediction_denorm[5]))

    # re-normalize the quantized values
    prediction_norm = C.minus(prediction_denorm_rounded, self.mean_vector[0:5])
    prediction_norm = C.element_divide(prediction_norm, self.std_vector[0:5])

    first = C.squared_error(prediction_norm, target[0:5])
    # re-normalize the rounded sixth component
    second = C.minus(C.round(prediction_denorm[5]), self.mean_vector[5])
    second = C.element_divide(second, self.std_vector[5])

    return C.plus(first, C.squared_error(second, target[5]))

and to use it instead of the standard squared_error. Here is the NN training part:

dnn_model = cntk_model.create_model(input, hidden_layer_type, hidden_layer_size, n_outs)
error_function = cntk_model.ErrorFunction(cmp_mean_vector, cmp_std_vector)
loss = error_function.create_error_function(dnn_model, output)
error = error_function.create_error_function(dnn_model, output)
lr_schedule = C.learning_rate_schedule(current_finetune_lr, C.UnitType.minibatch)
momentum_schedule = C.momentum_schedule(current_momentum)

learner = C.adam(dnn_model.parameters, lr_schedule, momentum_schedule, unit_gain=False,
                 l1_regularization_weight=l1_reg, l2_regularization_weight=l2_reg)

trainer = C.Trainer(dnn_model, (loss, error), [learner])
trainer.train_minibatch({input: temp_train_x, output: temp_train_y})

But after two epochs I keep getting essentially the same average loss, as my network is not learning.

1 Answer:

Answer 0 (score: 0)

Every time you want to change how backprop works, you need to use stop_gradient. This is the only function whose gradient is different from the gradient of the operation in the forward pass. In the forward pass, stop_gradient acts as the identity. In the backward pass, it blocks the gradient from propagating.
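
A minimal sketch of that behavior (my own illustration, assuming CNTK 2.x; the product x * x is chosen only so the gradient is easy to check by hand):

import numpy as np
import cntk as C

x = C.input_variable(1, needs_gradient=True)
y = C.stop_gradient(x) * x   # forward value: x * x

data = {x: np.array([[3.0]], dtype=np.float32)}
print(y.eval(data))   # 9.0 -- in the forward pass stop_gradient is the identity
print(y.grad(data))   # 3.0 rather than 6.0 -- no gradient flows through the
                      # stop_gradient factor, only through the plain x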

To perform an operation f(x) on some x in the forward pass and pretend in the backward pass as if it never happened, you need to do something like: C.stop_gradient(f(x) - x) + x. In your case that would be

norm_features = C.stop_gradient(features/normalization - features) + features
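
Following the same pattern with f = C.round, one way to keep the question's rounding in the forward pass while letting gradients through in the backward pass is a "straight-through" round (a sketch; round_straight_through is a helper name introduced here, not a CNTK API):

def round_straight_through(x):
    # forward pass: C.round(x); backward pass: identity
    return C.stop_gradient(C.round(x) - x) + x

Inside create_error_function, each C.round(...) would then be replaced by round_straight_through(...), for example:

n = round_straight_through(prediction_denorm[5])
prediction_denorm_rounded = round_straight_through(C.element_times(prediction_denorm[0:5], n))
prediction_denorm_rounded = C.element_divide(prediction_denorm_rounded, n)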