Actually, I want to use different loss functions during the training and validation phases. I tried `K.in_train_phase`, but it doesn't work.
So I'm just wondering: can I disable the val_loss computation?
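For context, this is roughly the pattern I tried. The two losses here are just placeholders for illustration; `K.in_train_phase` returns its first argument while the Keras learning phase is "training" and the second argument otherwise (e.g. during validation):

from keras import backend as K

def phase_dependent_loss(y_true, y_pred):
    train_loss = K.mean(K.square(y_pred - y_true), axis=-1)  # loss used while training
    val_loss = K.mean(K.abs(y_pred - y_true), axis=-1)       # loss used for val_loss
    return K.in_train_phase(train_loss, val_loss)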
Answer (score: 0)
The following example uses a custom loss function:
from keras.layers import Input, Dense
from keras.models import Model
from keras import backend as K

# Build a model
inputs = Input(shape=(128,))
layer1 = Dense(64, activation='relu')(inputs)
layer2 = Dense(64, activation='relu')(layer1)
predictions = Dense(10, activation='softmax')(layer2)
model = Model(inputs=inputs, outputs=predictions)

# Define a custom loss that closes over a layer's output tensor
def custom_loss(layer):
    # Adds the MSE loss to the mean of all squared activations of the given layer
    def loss(y_true, y_pred):
        return K.mean(K.square(y_pred - y_true), axis=-1) + K.mean(K.square(layer), axis=-1)
    # Return the inner function so Keras receives a (y_true, y_pred) callable
    return loss

# Compile the model, passing the selected layer's output tensor (here: layer1)
model.compile(optimizer='adam',
              loss=custom_loss(layer1),
              metrics=['accuracy'])

# Train ('data' and 'labels' stand for your training arrays)
model.fit(data, labels)
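The trick here is that `custom_loss` is a closure: Keras only ever calls a loss as `loss(y_true, y_pred)`, so any extra argument (such as a layer's output tensor) has to be baked in by an outer function that returns the real loss. As for your actual question: Keras only computes val_loss when you pass validation_data or validation_split to model.fit, so if you omit both (as in the call above), no validation loss is computed at all.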