I have the image names and labels as lists, and I want to get batches of 64 images/labels. I can get the images the right way, but the labels come out with shape (64, 8126): each column contains the same element repeated 64 times, and the rows hold the 8126 original label values, unshuffled.

I understand the problem: tf.train.shuffle_batch treats the whole 8126-element label vector as the label for every image. But how do I pass only a single label element per image?
import tensorflow as tf

def _get_images(shuffle=True):
    """Gets the images and labels as a batch."""
    # get the image and label lists
    _img_names, _img_class = _get_list()  # lists of image names and labels
    filename_queue = tf.train.string_input_producer(_img_names)
    # reader
    image_reader = tf.WholeFileReader()
    _, image_file = image_reader.read(filename_queue)
    # decode jpeg
    image_original = tf.image.decode_jpeg(image_file)
    label_original = tf.convert_to_tensor(_img_class, dtype=tf.int32)
    # print label_original
    # image preprocessing
    image = tf.image.resize_images(image_original, [224, 224])
    float_image = tf.cast(image, dtype=tf.float32)
    float_image = tf.image.per_image_standardization(float_image)
    # set the shape
    float_image.set_shape((224, 224, 3))
    # label_original.set_shape([8126])  # <<<== causes a (64, 8126) label batch without shuffling
    # parameters for shuffling
    batch_size = 64
    num_preprocess_threads = 16
    num_examples_per_epoch = 8000
    min_fraction_of_examples_in_queue = 0.4
    min_queue_examples = int(num_examples_per_epoch *
                             min_fraction_of_examples_in_queue)
    if shuffle:
        images_batch, label_batch = tf.train.shuffle_batch(
            [float_image, label_original],
            batch_size=batch_size,
            num_threads=num_preprocess_threads,
            capacity=min_queue_examples + 3 * batch_size,
            min_after_dequeue=min_queue_examples)
    else:
        images_batch, label_batch = tf.train.batch(
            [float_image, label_original],
            batch_size=batch_size,
            num_threads=num_preprocess_threads,
            capacity=min_queue_examples + 3 * batch_size)
    return images_batch, label_batch
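For reference, a minimal driver sketch for consuming the batches (the queue-runner boilerplate here is assumed, not part of the original code):

ib, lb = _get_images(shuffle=True)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # start the threads that feed string_input_producer / shuffle_batch
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    imgs, lbls = sess.run([ib, lb])
    print(imgs.shape, lbls.shape)  # labels come out (64, 8126) instead of (64,)
    coord.request_stop()
    coord.join(threads)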
Answer 0 (score: 0)
You can use tf.train.slice_input_producer:
# here _img_class should be a list; slice_input_producer returns a list
# of tensors (one per input), so take element [0] to get a scalar label
label = tf.train.slice_input_producer([_img_class])[0]
...
images_batch, label_batch = tf.train.shuffle_batch(
    [float_image, label],
    batch_size=batch_size,
    num_threads=num_preprocess_threads,
    capacity=min_queue_examples + 3 * batch_size,
    min_after_dequeue=min_queue_examples)
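Putting it together, here is a minimal end-to-end sketch. It assumes _get_list() returns a Python list of file paths and a parallel list of integer labels, and it swaps the WholeFileReader for tf.read_file so a single slice_input_producer can slice both lists together and keep each filename paired with its own label:

def _get_images(shuffle=True):
    """Batches (image, label) pairs, one scalar label per image."""
    _img_names, _img_class = _get_list()  # parallel lists: paths, int labels
    # one producer slices both lists in lockstep, so the pairs stay aligned
    filename, label = tf.train.slice_input_producer(
        [_img_names, tf.convert_to_tensor(_img_class, dtype=tf.int32)])
    image_file = tf.read_file(filename)
    image = tf.image.decode_jpeg(image_file, channels=3)
    image = tf.image.resize_images(image, [224, 224])
    float_image = tf.image.per_image_standardization(image)
    float_image.set_shape((224, 224, 3))
    batch_size = 64
    min_queue_examples = int(8000 * 0.4)
    if shuffle:
        return tf.train.shuffle_batch(
            [float_image, label],
            batch_size=batch_size,
            num_threads=16,
            capacity=min_queue_examples + 3 * batch_size,
            min_after_dequeue=min_queue_examples)
    return tf.train.batch(
        [float_image, label],
        batch_size=batch_size,
        num_threads=16,
        capacity=min_queue_examples + 3 * batch_size)

With this pipeline the label batch comes out with shape (64,), since each dequeued element carries exactly one scalar label.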