I am trying to use this paper for sound classification: https://raw.githubusercontent.com/karoldvl/paper-2015-esc-convnet/master/Poster/MLSP2015-poster-page-1.gif
The paper says that 0.001 L2 weight decay is added to every layer, but I cannot figure out how to do that in TensorFlow.
I found a similar question that uses tf.nn.l2_loss (How to define weight decay for individual layers in TensorFlow?), but it is not clear how to apply that approach to my network. Also, tf.nn.l2_loss has no argument where the 0.001 coefficient would go.
My network:
net = tf.layers.conv2d(inputs=x, filters=80, kernel_size=[57, 6], strides=[1, 1], padding="same", activation=tf.nn.relu)
print(net)
net = tf.layers.max_pooling2d(inputs=net, pool_size=[4, 3], strides=[1, 3])
print(net)
net = tf.layers.dropout(inputs=net, rate=keep_prob)
print(net)
net = tf.layers.conv2d(inputs=net, filters=80, kernel_size=[1, 3], strides=[1, 1], padding="same", activation=tf.nn.relu)
print(net)
net = tf.layers.max_pooling2d(inputs=net, pool_size=[1, 3], strides=[1, 3])
print(net)
net = tf.layers.flatten(net)
print(net)
# Dense Layer
net = tf.layers.dense(inputs=net, units=5000, activation=tf.nn.relu)
print(net)
net = tf.layers.dropout(inputs=net, rate=keep_prob)
print(net)
net = tf.layers.dense(inputs=net, units=5000, activation=tf.nn.relu)
print(net)
net = tf.layers.dropout(inputs=net, rate=keep_prob)
print(net)
logits = tf.layers.dense(inputs=net, units=num_classes)
print("logits: ", logits)
Output:
Tensor("Model/conv2d/Relu:0", shape=(?, 530, 129, 80), dtype=float32)
Tensor("Model/max_pooling2d/MaxPool:0", shape=(?, 527, 43, 80), dtype=float32)
Tensor("Model/dropout/Identity:0", shape=(?, 527, 43, 80), dtype=float32)
Tensor("Model/conv2d_2/Relu:0", shape=(?, 527, 43, 80), dtype=float32)
Tensor("Model/max_pooling2d_2/MaxPool:0", shape=(?, 527, 14, 80), dtype=float32)
Tensor("Model/flatten/Reshape:0", shape=(?, 590240), dtype=float32)
Tensor("Model/dense/Relu:0", shape=(?, 5000), dtype=float32)
Tensor("Model/dropout_2/Identity:0", shape=(?, 5000), dtype=float32)
Tensor("Model/dense_2/Relu:0", shape=(?, 5000), dtype=float32)
Tensor("Model/dropout_3/Identity:0", shape=(?, 5000), dtype=float32)
logits: Tensor("Model/dense_3/BiasAdd:0", shape=(?, 20), dtype=float32)
I found an implementation of the paper here: https://github.com/karoldvl/paper-2015-esc-convnet/blob/master/Code/_Networks/Net-DoubleConv.ipynb However, it is written in pylearn2.
How can I add 0.001 L2 weight decay to my code?
Answer (score: 2)
To add regularization to a conv2d layer, use the kernel_regularizer argument. For example, to apply the 0.001 L2 penalty to the first convolution of your network:
net = tf.layers.conv2d(inputs=x, filters=80, kernel_size=[57, 6], strides=[1,1], padding="same", activation=tf.nn.relu, kernel_regularizer=tf.contrib.layers.l2_regularizer(0.001))
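tf.layers.dense also accepts kernel_regularizer, so you can pass the same regularizer to every layer, as the paper prescribes. Keep in mind that these regularizers only register penalty terms in the tf.GraphKeys.REGULARIZATION_LOSSES collection; they are not applied automatically and still have to be added to your data loss. A minimal sketch of how the pieces could fit together, assuming a cross_entropy_loss tensor and an Adam optimizer that are not part of your posted code:
regularizer = tf.contrib.layers.l2_regularizer(0.001)

# Pass the same penalty to the dense layers as well, e.g.:
net = tf.layers.dense(inputs=net, units=5000, activation=tf.nn.relu,
                      kernel_regularizer=regularizer)

# The kernel_regularizer calls only add terms to a collection;
# sum them and add them to the data loss yourself.
reg_loss = tf.losses.get_regularization_loss()
total_loss = cross_entropy_loss + reg_loss
train_op = tf.train.AdamOptimizer(1e-4).minimize(total_loss)
Equivalently, you could follow the linked answer and build the penalty by hand as 0.001 * tf.nn.l2_loss(weights) for each weight tensor, then add the sum to your loss; the 0.001 is simply a multiplier you apply yourself, not an argument of tf.nn.l2_loss.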