I am using TensorFlow / slim with a conv2d layer that has L2 regularization, like this:
net = slim.conv2d(inputs, n_filters, filter_size, weights_regularizer=slim.l2_regularizer(0.001), activation_fn=None)
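(For context, as I understand it: slim records each layer's weight penalty in the tf.GraphKeys.REGULARIZATION_LOSSES collection, one scalar tensor per regularized layer, and that is what tf.losses.get_regularization_losses() returns later. A minimal sketch with a hypothetical input shape, assuming TF 1.x:)

import tensorflow as tf
import tensorflow.contrib.slim as slim  # TF 1.x

# Hypothetical input shape, just to show the collection mechanics.
inputs = tf.placeholder(tf.float32, [1, 512, 512, 3])
net = slim.conv2d(inputs, 64, [3, 3],
                  weights_regularizer=slim.l2_regularizer(0.001),
                  activation_fn=None)

# One scalar tensor per regularized layer ends up in this collection;
# note that it comes back as a plain Python *list* of tensors.
print(tf.losses.get_regularization_losses())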
To build the total loss from the cross entropy and the regularization losses, I wrote the following:
loss_cross = tf.nn.softmax_cross_entropy_with_logits_v2(logits=network, labels=net_output)
beta = 0.01
regularization_losses = tf.losses.get_regularization_losses()
losses = loss_cross + beta * regularization_losses
loss = tf.reduce_mean(losses)
opt = tf.train.AdamOptimizer(lr).minimize(loss, var_list=[var for var in tf.trainable_variables()])
This raises the error:
Traceback (most recent call last):
File "main_orgsettings_losses.py", line 288, in <module>
losses = loss_cross + beta * regularization_losses
TypeError: can't multiply sequence by non-int of type 'float'
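If I read this right, tf.losses.get_regularization_losses() returns a plain Python list of tensors, so beta * regularization_losses multiplies a list by a float, which is exactly what the TypeError says. A minimal sketch of what I assume the fix for that step looks like (the list has to be collapsed into a single tensor first, e.g. with tf.add_n; assuming at least one regularized layer exists in the graph):

# regularization_losses is a Python list like [<tf.Tensor ...>], so
# 0.01 * regularization_losses raises the TypeError above, whereas
# summing the list into one scalar tensor first is legal:
reg_term = tf.add_n(tf.losses.get_regularization_losses())
scaled_reg = 0.01 * reg_term  # scalar tensor, no TypeError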
When I remove beta, the error message becomes:
InvalidArgumentError: Incompatible shapes: [1,512,512] vs. [1]
Here [512, 512] matches my input size, and [1] corresponds to the only layer that has regularization.
When I instead add the regularization loss after the reduce_mean:
loss = tf.reduce_mean(losses) + regularization_losses
It works, but then beta is not used, and I believe regularization_losses should be added to losses before the reduce_mean, not after it.
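If that reading is right, here is a sketch of what I have in mind (reusing network, net_output, beta and lr from above; since the summed regularization term is a scalar, adding it before or after the reduce_mean should be mathematically equivalent):

loss_cross = tf.nn.softmax_cross_entropy_with_logits_v2(logits=network, labels=net_output)  # shape [1, 512, 512]
reg_term = tf.add_n(tf.losses.get_regularization_losses())  # one scalar tensor
loss = tf.reduce_mean(loss_cross) + beta * reg_term
opt = tf.train.AdamOptimizer(lr).minimize(loss, var_list=tf.trainable_variables())

Is this the correct way to apply beta to the regularization losses?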