Suppose you have a Keras model with n output neurons, where each neuron is associated with one regression variable (e.g. a car's speed, a car's height, ...), as in the following snippet:
# define Keras model
input_layer = Input(shape=shape)
... # e.g. conv layers
x = Dense(n, activation='linear')(x)
model = Model(inputs=input_layer, outputs=x)
model.compile(loss='mean_absolute_error', optimizer='sgd', metrics=['mean_squared_error'])
history = model.fit_generator(...)
Now, the MAE loss stored in the history dictionary is a single number, computed from the n-dimensional y_pred and y_true arrays. So that single loss value is the mean of the individual losses of the n labels, as can be seen in the Keras MAE function:
def mean_absolute_error(y_true, y_pred):
    return K.mean(K.abs(y_pred - y_true), axis=-1)
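To make the averaging concrete, here is a small NumPy sketch (the array shapes and values are illustrative assumptions) of what this function computes for a batch of n-dimensional targets:

```python
import numpy as np

# Batch of 2 samples, n = 3 labels each (shapes chosen for illustration).
y_true = np.array([[1.0, 2.0, 3.0],
                   [4.0, 5.0, 6.0]])
y_pred = np.array([[1.5, 2.0, 2.0],
                   [4.0, 7.0, 6.0]])

# Keras' mean_absolute_error averages over the last axis (the n labels),
# producing one loss value per sample ...
per_sample = np.mean(np.abs(y_pred - y_true), axis=-1)  # shape (2,)

# ... which Keras then averages over the batch into the single number
# shown in the progress bar and stored in the history.
batch_loss = per_sample.mean()
print(per_sample, batch_loss)  # [0.5, 0.666...] and 0.58333...
```

The per-label information is lost in that last averaging step, which is exactly what the question is about.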
However, I would like to get a history object that contains the loss for each of the n labels, i.e. {'loss': {'speed': loss_value_speed, 'height': loss_value_height}}. Ideally, the progress bar during training should also show the individual losses instead of the combined loss.
How can I do that?
I think one could write a custom metric for each output neuron that computes the loss only for a single index of the y_pred and y_true vectors, but that feels like a workaround:
def mean_absolute_error_label_0(y_true, y_pred):
    # calculate the loss only for the first label, label_0
    # (slice the label axis, not the batch axis)
    return K.mean(K.abs(y_pred[:, 0] - y_true[:, 0]), axis=-1)
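Rather than copy-pasting one such function per label, the per-index metrics could be generated in a loop. A minimal sketch of that factory pattern, using NumPy in place of the Keras backend (the names `make_label_mae` and `mae_...` are hypothetical):

```python
import numpy as np

def make_label_mae(index, name):
    """Build a metric that measures MAE for a single label index."""
    def label_mae(y_true, y_pred):
        # Slice out one column of the (batch, n) arrays and average
        # the absolute error over the batch.
        return np.mean(np.abs(y_pred[:, index] - y_true[:, index]))
    label_mae.__name__ = f'mae_{name}'  # Keras displays __name__ in logs
    return label_mae

# One metric per regression target; the target names are illustrative.
metrics = [make_label_mae(i, n) for i, n in enumerate(('speed', 'height'))]

y_true = np.array([[1.0, 2.0], [3.0, 4.0]])
y_pred = np.array([[1.5, 2.0], [3.0, 5.0]])
print([m(y_true, y_pred) for m in metrics])  # [0.25, 0.5]
```

With the real Keras backend you would use K.mean/K.abs instead of NumPy and pass the list as `metrics=...` to model.compile; the values would then show up in the progress bar, though the combined loss would still be the one optimized.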
Answer 0 (score: 1)
One possible solution is to use a separate output layer for each target and to give each one a name (i.e. Dense(1, name='...')). In your case, this trains the same model as a single Dense(n) output layer, because the total loss is simply the sum of the individual losses.
For example,
input_layer = Input(shape=(1000,))
x = Dense(100)(input_layer)
# name each output layer
target_names = ('speed', 'height')
outputs = [Dense(1, name=name)(x) for name in target_names]
model = Model(inputs=input_layer, outputs=outputs)
model.compile(loss='mean_absolute_error', optimizer='sgd', metrics=['mean_squared_error'])
Now when you fit the model, you should see the loss (and metrics) for each target separately.
X = np.random.rand(10000, 1000)
y = [np.random.rand(10000) for _ in range(len(outputs))]
history = model.fit(X, y, epochs=3)
Epoch 1/3
10000/10000 [==============================] - 1s 127us/step - loss: 0.9714 - speed_loss: 0.4768 - height_loss: 0.4945 - speed_mean_squared_error: 0.5253 - height_mean_squared_error: 0.5939
Epoch 2/3
10000/10000 [==============================] - 1s 101us/step - loss: 0.5109 - speed_loss: 0.2569 - height_loss: 0.2540 - speed_mean_squared_error: 0.0911 - height_mean_squared_error: 0.0895
Epoch 3/3
10000/10000 [==============================] - 1s 107us/step - loss: 0.5040 - speed_loss: 0.2529 - height_loss: 0.2511 - speed_mean_squared_error: 0.0873 - height_mean_squared_error: 0.0862
The losses saved to the returned history object will be named accordingly.
print(history.history)
{'height_loss': [0.49454938204288484, 0.2539591451406479, 0.25108356306552887],
'height_mean_squared_error': [0.5939331066846848,
0.08951960142850876,
0.08619525188207626],
'loss': [0.9713814586639404, 0.5108571118354798, 0.5040025643348693],
'speed_loss': [0.47683207807540895, 0.25689796624183653, 0.25291900217533114],
'speed_mean_squared_error': [0.5252606071352959,
0.09107607080936432,
0.0872862442612648]}
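As a sanity check that the combined loss really is the sum of the per-output losses, the first-epoch values from the history printed above can be added up:

```python
# First-epoch values copied from the history output above.
height_loss = 0.49454938204288484
speed_loss = 0.47683207807540895
total_loss = 0.9713814586639404

# The combined 'loss' equals the sum of the per-output losses
# (up to floating-point rounding in the logged values).
print(abs((height_loss + speed_loss) - total_loss))  # on the order of 1e-9
```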
Edit: If the loss for the height output depends on the value of speed, you can:

- name the Concatenate layer "height", so it becomes the height output in the history object
- provide two loss functions to model.compile() (one for speed and one for the concatenated height output)

def custom_loss(y_true, y_pred):
    y_pred_height = y_pred[:, 0]
    y_pred_speed = y_pred[:, 1]
    # some loss which depends on the value of `speed`
    loss = losses.mean_absolute_error(y_true, y_pred_height * y_pred_speed)
    return loss
input_layer = Input(shape=(1000,))
x = Dense(100, activation='relu')(input_layer)
output_speed = Dense(1, activation='relu', name='speed')(x)
output_height = Dense(1, activation='relu')(x)
output_merged = Concatenate(name='height')([output_height, output_speed])
model = Model(inputs=input_layer, outputs=[output_speed, output_merged])
model.compile(loss={'speed': 'mean_absolute_error', 'height': custom_loss},
optimizer='sgd',
metrics={'speed': 'mean_squared_error'})
The output will be:
X = np.random.rand(10000, 1000)
y = [np.random.rand(10000), np.random.rand(10000)]
history = model.fit(X, y, epochs=3)
Epoch 1/3
10000/10000 [==============================] - 5s 501us/step - loss: 1.0001 - speed_loss: 0.4976 - height_loss: 0.5026 - speed_mean_squared_error: 0.3315
Epoch 2/3
10000/10000 [==============================] - 2s 154us/step - loss: 0.9971 - speed_loss: 0.4960 - height_loss: 0.5011 - speed_mean_squared_error: 0.3285
Epoch 3/3
10000/10000 [==============================] - 1s 149us/step - loss: 0.9971 - speed_loss: 0.4960 - height_loss: 0.5011 - speed_mean_squared_error: 0.3285
print(history.history)
{'height_loss': [0.502568191242218, 0.5011419380187988, 0.5011419407844544],
'loss': [1.0001451692581176, 0.9971360887527466, 0.9971360870361328],
'speed_loss': [0.4975769768714905, 0.4959941484451294, 0.4959941472053528],
'speed_mean_squared_error': [0.33153974375724793,
0.32848617186546325,
0.32848617215156556]}