TensorFlow layers API: CNN parameters do not change during training

Date: 2017-07-16 15:54:39

Tags: python tensorflow

I'm new to TensorFlow, so this question may be really silly.

I've been trying to write a simple CNN for the MNIST handwritten digit dataset with TensorFlow. The problem is that the optimizer never updates the parameters (monitored via TensorBoard summaries). The graph itself seems fine, even though the scopes created by the layers API look a bit odd, and gradients are computed for every layer. Please help!

I'm using the training data from here: http://yann.lecun.com/exdb/mnist/

Here is the code:

import tensorflow as tf

DATA = 'train-images.idx3-ubyte'
LABELS = 'train-labels.idx1-ubyte'
NUM_EPOCHS = 2
BATCH_SIZE = 15
#Data definition
data_queue = tf.train.string_input_producer([DATA,])
label_queue = tf.train.string_input_producer([LABELS,])

reader_data = tf.FixedLengthRecordReader(record_bytes=28*28, header_bytes = 16)
reader_labels = tf.FixedLengthRecordReader(record_bytes=1, header_bytes = 8)

(_,data_rec) = reader_data.read(data_queue)
(_,label_rec) = reader_labels.read(label_queue)

image = tf.decode_raw(data_rec, tf.uint8)
image = tf.reshape(image, [28, 28, 1])
label = tf.decode_raw(label_rec, tf.uint8)
label = tf.reshape(label, [1])

image_batch, label_batch = tf.train.shuffle_batch([image, label],
                                                 batch_size=BATCH_SIZE,
                                                 capacity=100,
                                                 min_after_dequeue = 30)
#Layers definition
conv = tf.layers.conv2d(
  inputs=tf.cast(image_batch, tf.float32),
  filters=15,
  kernel_size=[5,5],
  padding='same',
  activation=tf.nn.relu)

conv1 = tf.layers.conv2d(
  inputs=conv,
  filters=15,
  kernel_size=[3,3],
  padding='same',
  activation=tf.nn.relu)

pool_flat = tf.reshape(conv1, [BATCH_SIZE, -1])

dense1 = tf.layers.dense(inputs=pool_flat, units=30, activation=tf.nn.relu)

output = tf.nn.softmax(tf.layers.dense(inputs=dense1, units=10))

#train operation definition
onehot_labels = tf.one_hot(indices=tf.cast(tf.reshape(label_batch,[-1]), tf.int32), depth=10)

loss = tf.losses.softmax_cross_entropy(onehot_labels=onehot_labels,
                                       logits=output)

global_step = tf.Variable(0,name='global_step',trainable=False)
train_op = tf.train.GradientDescentOptimizer(0.05).minimize(loss, global_step = global_step)

#Summaries definition

# histogram summaries for every variable created by the four layers
for scope in ('conv2d', 'conv2d_1', 'dense', 'dense_1'):
    for var in tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=scope):
        tf.summary.histogram(var.name, var)
tf.summary.image("inp", image_batch, max_outputs =1)
loss_summary = tf.summary.scalar("loss", loss)
summaries = tf.summary.merge_all()

#init
sess = tf.Session()
summary_writer = tf.summary.FileWriter('log_simple_stats', sess.graph)
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord, sess=sess)
sess.run(tf.global_variables_initializer())

#loop
for i in range((60000*NUM_EPOCHS)//BATCH_SIZE):
    sess.run(train_op)
    if i % 100 == 0:  # log summaries every 100 steps
        merged = sess.run(summaries)
        summary_writer.add_summary(merged, i)


coord.request_stop()
coord.join(threads)

EDIT: Custom layers produce the same result.

Custom layer definitions:

def convol(input, inp, outp, name="conv"):
    # 5x5 conv layer: `inp` input channels -> `outp` output channels, ReLU activation
    with tf.name_scope(name):
        w = tf.Variable(tf.truncated_normal([5, 5, inp, outp], stddev=0.1), name="W")
        b = tf.Variable(tf.constant(0.1, shape=[outp]), name="B")
        filtered = tf.nn.conv2d(input, w, strides=[1, 1, 1, 1], padding="SAME", name="conv2d")
        activation = tf.nn.relu(features=(filtered + b), name="activation")
        tf.summary.histogram(name=w.name, values=w)
        tf.summary.histogram(name=b.name, values=b)
        tf.summary.histogram(name=activation.name, values=activation)
        return activation

def dense(input, inp, outp, name="dense"):
    # fully connected layer: `inp` inputs -> `outp` outputs, no activation applied here
    with tf.name_scope(name):
        w = tf.Variable(tf.truncated_normal([inp, outp], stddev=0.1), name="W")
        b = tf.Variable(tf.constant(0.1, shape=[outp]), name="B")
        act = tf.matmul(input, w) + b
        tf.summary.histogram(name=w.name, values=w)
        tf.summary.histogram(name=b.name, values=b)
        tf.summary.histogram(name="activation", values=act)
        return act
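
For completeness, a minimal sketch of how these custom layers would be wired in place of the tf.layers calls above (the channel counts and flattened size are my assumptions, based on the 28x28 inputs and the 15 filters used earlier):

conv = convol(tf.cast(image_batch, tf.float32), 1, 15, name="conv1")
conv1 = convol(conv, 15, 15, name="conv2")
pool_flat = tf.reshape(conv1, [BATCH_SIZE, -1])   # SAME padding keeps 28x28, so 28*28*15 features
dense1 = tf.nn.relu(dense(pool_flat, 28 * 28 * 15, 30, name="dense1"))
output = tf.nn.softmax(dense(dense1, 30, 10, name="dense2"))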

EDIT:

So after messing around with this and the MNIST examples for a while, I noticed that no weights were being learned. The way I was handling the data reading was messing up something about the gradient computation. I just plugged the class that reads the MNIST dataset into my code instead, and it works 100%, with no tweaking of the parameters needed.
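
(Roughly what that replacement looks like, in case it helps anyone: a minimal sketch using the reader from the TensorFlow tutorials, feeding placeholders instead of the queue pipeline above. The placeholder wiring here is illustrative, not my exact code.)

from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

# feed the network through placeholders instead of the queue pipeline
x = tf.placeholder(tf.float32, [None, 784])
labels = tf.placeholder(tf.float32, [None, 10])
image_batch = tf.reshape(x, [-1, 28, 28, 1])   # NHWC layout for the conv layers

# ... build the layers, loss and train_op as above, using `labels` as onehot_labels ...

for i in range((60000 * NUM_EPOCHS) // BATCH_SIZE):
    batch_xs, batch_ys = mnist.train.next_batch(BATCH_SIZE)
    sess.run(train_op, feed_dict={x: batch_xs, labels: batch_ys})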

1 Answer:

Answer 0 (score: 0):

I ran into the same problem with my own CNN, and for me the cause was simply a combination of the following:

  • Too few iterations: it can take a while (tens of thousands of iterations) before any change becomes visible
  • An overly complex model: drastically reduce the number of filters for a first test, then slowly grow them to fit your use case, just to make sure the problem isn't something else (see the sketch after this list)
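
For that second point, a stripped-down first test could look something like this (the filter count is only an illustrative guess, reusing the image_batch pipeline from your question):

# sanity-check model: one small conv layer instead of the full stack
conv = tf.layers.conv2d(
    inputs=tf.cast(image_batch, tf.float32),
    filters=4,              # far fewer than the original 15 filters
    kernel_size=[5, 5],
    padding='same',
    activation=tf.nn.relu)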

For better debugging, try visualizing your filters with TensorBoard; this gist helped me a lot:

https://gist.github.com/kukuruza/03731dc494603ceab0c5
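
If you don't want to pull in the whole gist, a bare-bones version of the same idea (assuming the first conv layer's kernel is named 'conv2d/kernel' by tf.layers, as in your code) might look like:

# grab the first conv layer's kernel: shape [height, width, in_channels, out_channels]
kernel = [v for v in tf.trainable_variables() if v.name == 'conv2d/kernel:0'][0]

# make each filter one image in the batch dimension: [out_channels, height, width, in_channels]
kernel_imgs = tf.transpose(kernel, [3, 0, 1, 2])
tf.summary.image('conv1_kernels', kernel_imgs, max_outputs=15)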

Both of your approaches (tf.layers and manually created variables) should be correctly connected to the train_op, so I don't think that's the problem.
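
One quick way to verify that wiring yourself is to ask the optimizer for its gradients before applying them; a minimal sketch, using the loss and global_step from your code:

opt = tf.train.GradientDescentOptimizer(0.05)
grads_and_vars = opt.compute_gradients(loss)
for grad, var in grads_and_vars:
    # a None gradient means the variable is not connected to the loss at all
    print(var.name, 'NO gradient' if grad is None else 'has gradient')
train_op = opt.apply_gradients(grads_and_vars, global_step=global_step)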