slim inception_v4 retrain ValueError: All shapes must be fully defined

Asked: 2017-07-04 22:30:54

Tags: tensorflow

I am getting the following error

ValueError: All shapes must be fully defined: [TensorShape([Dimension(299), Dimension(299), Dimension(3)]), TensorShape([Dimension(None)])]

while retraining inception_v4 with TF-Slim.

Full traceback:

Traceback (most recent call last):
  File "../models/slim/train_vienna_classifier.py", line 575, in <module>
    tf.app.run()
  File "/home/osman/anaconda2/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 44, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "../models/slim/train_vienna_classifier.py", line 441, in main
    capacity=5 * FLAGS.batch_size)
  File "/home/osman/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/input.py", line 872, in batch
    name=name)
  File "/home/osman/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/input.py", line 658, in _batch
    capacity=capacity, dtypes=types, shapes=shapes, shared_name=shared_name)
  File "/home/osman/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/data_flow_ops.py", line 685, in __init__
    shapes = _as_shape_list(shapes, dtypes)
  File "/home/osman/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/data_flow_ops.py", line 77, in _as_shape_list
    raise ValueError("All shapes must be fully defined: %s" % shapes)
ValueError: All shapes must be fully defined: [TensorShape([Dimension(299), Dimension(299), Dimension(3)]), TensorShape([Dimension(None)])]

Code:

with tf.device(deploy_config.inputs_device()):
  provider = slim.dataset_data_provider.DatasetDataProvider(
      dataset,
      num_readers=FLAGS.num_readers,
      common_queue_capacity=20 * FLAGS.batch_size,
      common_queue_min=10 * FLAGS.batch_size)
  [image, label] = provider.get(['image', 'label'])
  label -= FLAGS.labels_offset

  train_image_size = FLAGS.train_image_size or network_fn.default_image_size

  image = image_preprocessing_fn(image, train_image_size, train_image_size)
  images, labels = tf.train.batch(
      [image, label],
      batch_size=FLAGS.batch_size,
      num_threads=FLAGS.num_preprocessing_threads,
      capacity=5 * FLAGS.batch_size)
  labels = slim.one_hot_encoding(
      labels, dataset.num_classes - FLAGS.labels_offset)
  batch_queue = slim.prefetch_queue.prefetch_queue(
      [images, labels], capacity=2 * deploy_config.num_clones)

Although the images in the dataset have different sizes, I resize them with the provided preprocessing function, so this should not raise an error. Am I right?
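For reference, a quick way to see which tensor is missing a static shape is to print the shapes right before the tf.train.batch call (a debugging sketch, not part of the original script):

    # Debugging sketch: inspect the static shapes that tf.train.batch will see.
    print(image.get_shape())  # e.g. (299, 299, 3) after preprocessing
    print(label.get_shape())  # e.g. (?,) -- an undefined dimension triggers the ValueError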

1 Answer:

Answer 0 (score: 1)

The problem is not the image but the label, whose shape is not fully defined: [TensorShape([Dimension(299), Dimension(299), Dimension(3)]), TensorShape([Dimension(None)])]. The second TensorShape, which belongs to the label, has a dimension of None. Giving the label a fully defined shape should fix the problem.

Use the tf.reshape() function to set the shape of the label.
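A minimal sketch of that fix, placed right after the provider returns the tensors, assuming each example carries a single scalar class id (the exact target shape depends on how the labels were serialized in your TFRecords):

    [image, label] = provider.get(['image', 'label'])
    label -= FLAGS.labels_offset

    # Assumption: each example has exactly one class id, so the label can be
    # reshaped to a scalar. tf.reshape() gives it a fully defined static shape,
    # which tf.train.batch requires; label.set_shape([1]) would be an
    # alternative if the label is stored as a length-1 vector.
    label = tf.reshape(label, [])

After this change the shapes passed to tf.train.batch are (299, 299, 3) and (), both fully defined, so the batching queue can be constructed.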