Computing tf.nn.sparse_softmax_cross_entropy_with_logits()

Date: 2016-10-24 15:43:05

Tags: tensorflow

I am trying to implement a simple CNN using TensorFlow + TFRecords + tf.train.shuffle_batch(). The images are 64 x 64 and the labels are int32 values representing the class.
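For context, the records parsed below contain an `image_raw` bytes feature and an int64 `label` feature. A minimal sketch of how such a file might have been written (my assumption, using the TF 1.x-era API; the helper name, filename and data are placeholders, not from the question):

import tensorflow as tf

# Sketch only (assumed, not from the original post): one tf.train.Example per image,
# with the raw pixel bytes under 'image_raw' and the class id under 'label'.
def write_example(writer, image_bytes, label):
    example = tf.train.Example(features=tf.train.Features(feature={
        'image_raw': tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_bytes])),
        'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
    }))
    writer.write(example.SerializeToString())

# writer = tf.python_io.TFRecordWriter('train.tfrecords')  # hypothetical path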

My read-and-decode function looks like this:

def read_and_decode(filename_queue):
    reader = tf.TFRecordReader()
    _, serialized_example = reader.read(filename_queue)
    features = tf.parse_single_example(
        serialized_example,
        # Defaults are not specified since both keys are required.
        features={
            'image_raw': tf.FixedLenFeature([], tf.string),
            'label': tf.FixedLenFeature([], tf.int64),
        })

    image = tf.decode_raw(features['image_raw'], tf.uint8)
    image.set_shape([64*64*3])
    image = tf.reshape(image, [64, 64, 3])

    # Convert from [0, 255] -> [-0.5, 0.5] floats.
    image = tf.cast(image, tf.float32) * (1. / 255) - 0.5

    # Convert label from a scalar uint8 tensor to an int32 scalar.
    label = tf.cast(features['label'], tf.int32)
    return image, label

I use this function to create and feed the batches:

filename_queue = tf.train.string_input_producer(
    [filename], num_epochs=100,shuffle=True)

image,label = read_and_decode(filename_queue)


min_after_dequeue = 1000
batch_size = 8
capacity =  min_after_dequeue + 3 * batch_size

images_batch, label_batch = tf.train.shuffle_batch(
    [image, label], batch_size=batch_size,
    enqueue_many=False, shapes=None,
    allow_smaller_final_batch=True,
    capacity=capacity,
    min_after_dequeue=min_after_dequeue)
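As a side note (my reading of the tf.train.shuffle_batch behaviour, not something stated in the question): with shapes=None the op infers the shapes from the static shapes of image and label, which is why read_and_decode calls set_shape/reshape, and with allow_smaller_final_batch=True the batch dimension is left undefined. A quick sanity check of the static shapes might look like this:

# Illustrative only, not in the original code.
print(images_batch.get_shape())   # expected: (?, 64, 64, 3) -- batch dim unknown
print(label_batch.get_shape())    # expected: (?,)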

The first layer of my network is then fed with images_batch:

# 1st layer
W_conv1 = weight_variable([5,5,3,64])
b_conv1 = bias_variable([64])

#input is images_batch
h_conv1 = tf.nn.relu(conv2d(images_batch,W_conv1) + b_conv1)
h_pool1 = max_pool_2X2(h_conv1)
...
...
# readout layer
W_fc2 = weight_variable([2048,200])
b_fc2 = bias_variable([200])
y_conv=tf.matmul(h_fc1,W_fc2) + b_fc2

# Define loss and optimizer
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(y_conv, label_batch)
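The helpers weight_variable, bias_variable, conv2d and max_pool_2X2 are not shown in the question; a common way to define them (a sketch following the standard TensorFlow tutorial pattern, not necessarily what was actually used here) is:

# Assumed helper definitions (not shown in the question).
def weight_variable(shape):
    # Truncated-normal initialisation, as in the TensorFlow "Deep MNIST" tutorial style.
    return tf.Variable(tf.truncated_normal(shape, stddev=0.1))

def bias_variable(shape):
    return tf.Variable(tf.constant(0.1, shape=shape))

def conv2d(x, W):
    # 'SAME' padding keeps the spatial size, so only the pooling layers shrink the feature map.
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2X2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')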

My training loop looks like this:

init = tf.initialize_all_variables()
sess.run(init)
sess.run(tf.initialize_local_variables())

coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)

_iter = 0
try:
    while not coord.should_stop():
        # Run training steps or whatever
        #print(label_batch.eval())
        _iter += 1
        _, loss_val = sess.run([train_op, loss_mean])
        print _iter, loss_val
        #assert not np.isnan(loss_val), _iter
except tf.errors.OutOfRangeError:
    print('Done training -- epoch limit reached')
finally:
    # When done, ask the threads to stop.
    coord.request_stop()

    #Wait for threads to finish.
    coord.join(threads)
    sess.close()
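train_op and loss_mean are used above but never defined in the question; presumably they look something like this (an assumption on my part; the optimizer and learning rate are placeholders):

# Assumed definitions (not shown in the question).
loss_mean = tf.reduce_mean(loss)                              # average the per-example losses
train_op = tf.train.AdamOptimizer(1e-4).minimize(loss_mean)   # hypothetical optimizer choice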

OK, so when I run this, TensorFlow gives me an error:

InvalidArgumentError (see above for traceback): logits and labels must have the same first dimension, got logits shape [2,200] and labels shape [8]

It seems the labels are being fed in correctly, but images_batch is not. My batch_size here is 8; if I try other batch_size values, the first dimension of y_conv always comes out smaller than the batch_size I defined.
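For what it's worth, the elided middle layers presumably flatten the pooled output with something like tf.reshape(h_pool, [-1, 2048]), since W_fc2 expects 2048 inputs. If the per-image feature count does not actually equal 2048, such a reshape silently changes the batch dimension instead of failing, which would match logits of shape [2, 200] for a batch of 8. A toy illustration of that effect (my assumption, not code from the question):

import numpy as np

# Toy example: reshaping with the wrong per-image size alters the batch dimension.
pooled = np.zeros((8, 16, 16, 2))   # hypothetical pooled output: 8 images x 512 values each
flat = pooled.reshape(-1, 2048)     # total size 8*512 = 4096 is preserved, so...
print(flat.shape)                   # (2, 2048) -> a [2, 200] readout instead of [8, 200]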

Why is this happening?

0 Answers