tf.train.shuffle_batch()

Time: 2018-10-15 17:06:59

Tags: python-3.x tensorflow

I am very new to the TensorFlow framework, and I am trying to read and explore the CIFAR-10 dataset with this code.

import tensorflow as tf
import numpy as np
import os
import matplotlib.pyplot as plt

sess=tf.Session()

batch_size = 128
output_every = 50
generations = 20000
eval_every = 500
image_height = 32
image_width = 32
crop_height = 24
crop_width = 24
num_channels = 3
num_targets = 10
data_dir="CIFAR10"


image_vec_length = image_height * image_width * num_channels
record_length = 1 + image_vec_length

def read_cifar_files(filename_queue, distort_images=True):
    # Each CIFAR-10 record is 1 label byte followed by 32*32*3 image bytes
    reader = tf.FixedLengthRecordReader(record_bytes=record_length)
    key, record_string = reader.read(filename_queue)
    record_bytes = tf.decode_raw(record_string, tf.uint8)

    # Extract label (the first byte of the record)
    image_label = tf.cast(tf.slice(record_bytes, [0], [1]), tf.int32)

    # Extract image (the remaining bytes, stored as channels x height x width)
    sliced = tf.slice(record_bytes, [1], [image_vec_length])
    image_extracted = tf.reshape(sliced, [num_channels, image_height, image_width])

    # Reshape image to height x width x channels and convert to float
    image_uint8image = tf.transpose(image_extracted, [1, 2, 0])
    reshaped_image = tf.cast(image_uint8image, tf.float32)

    # Randomly crop image to the training size
    final_image = tf.image.resize_image_with_crop_or_pad(reshaped_image, crop_height, crop_width)

    if distort_images:
        # Randomly flip the image horizontally, change the brightness and contrast
        final_image = tf.image.random_flip_left_right(final_image)
        final_image = tf.image.random_brightness(final_image, max_delta=63)
        final_image = tf.image.random_contrast(final_image, lower=0.2, upper=1.8)

    # Standardization (applied whether or not distortions are enabled)
    final_image = tf.image.per_image_standardization(final_image)
    return final_image, image_label

When I run the following input_pipeline() function without tf.train.shuffle_batch(), it gives me a single image tensor of shape (24, 24, 3).

def input_pipeline(batch_size, train_logical=True):
    # Queue up the five CIFAR-10 training files and read one record at a time
    files = [os.path.join(data_dir, "data_batch_{}.bin".format(i)) for i in range(1, 6)]
    filename_queue = tf.train.string_input_producer(files)
    image, label = read_cifar_files(filename_queue)
    return image, label


example_batch, label_batch = input_pipeline(batch_size)
threads = tf.train.start_queue_runners(sess=sess)
img, label = sess.run([example_batch, label_batch])

print(img.shape)  # output: (24, 24, 3)

But when I run the same input_pipeline() function with tf.train.shuffle_batch(), it gives me an image tensor containing 128 images, with shape (128, 24, 24, 3).

def input_pipeline(batch_size, train_logical=True):
    files = [os.path.join(data_dir, "data_batch_{}.bin".format(i)) for i in range(1, 6)]
    filename_queue = tf.train.string_input_producer(files)
    image, label = read_cifar_files(filename_queue)

    # Queue single examples and dequeue shuffled batches of size batch_size
    min_after_dequeue = 1000
    capacity = min_after_dequeue + 3 * batch_size
    example_batch, label_batch = tf.train.shuffle_batch([image, label], batch_size, capacity, min_after_dequeue)
    return example_batch, label_batch

How is that possible? It seems that tf.train.shuffle_batch() takes a single image tensor from read_cifar_files() and returns a tensor holding 128 images. So what exactly does the tf.train.shuffle_batch() function do?

1 Answer:

Answer 0 (score: 1)

In TensorFlow, a Tensor is just a node in the graph. The tf.train.shuffle_batch() function takes two such nodes as input (your image and label tensors), and through the graph those nodes know how to load the data.

So its input is not "a single image" but a sub-graph that can load an image. It then adds new operations to the graph that run that input sub-graph n = batch_size times, shuffle the results, and return an output tensor of shape [batch_size, input_shape].
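To make that concrete, here is a minimal sketch that is independent of your CIFAR pipeline; a random tensor stands in for the single-example sub-graph, and the variable names are only illustrative:

import tensorflow as tf

# A graph node that yields ONE example of shape (24, 24, 3) each time it runs
single_image = tf.random_normal([24, 24, 3])
single_label = tf.constant([1], dtype=tf.int32)

# shuffle_batch adds queueing ops that run the single-example sub-graph many
# times, shuffle the results, and dequeue them in groups of batch_size
image_batch, label_batch = tf.train.shuffle_batch(
    [single_image, single_label],
    batch_size=128,
    capacity=1000 + 3 * 128,
    min_after_dequeue=1000)

print(image_batch.get_shape())  # (128, 24, 24, 3)
print(label_batch.get_shape())  # (128, 1)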

Then, when you run it in the session, the data is loaded lazily through the graph, which means that every time you evaluate the output of tf.train.shuffle_batch() you read n = batch_size images from disk.
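In practice the loop then looks roughly like the sketch below (assuming the batched input_pipeline() above; the tf.train.Coordinator is optional but gives a clean shutdown of the queue threads):

example_batch, label_batch = input_pipeline(batch_size)

sess = tf.Session()
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)

# Each sess.run() dequeues a fresh shuffled batch; the reader ops keep
# pulling records from the .bin files in the background to refill the queue
for step in range(3):
    imgs, labels = sess.run([example_batch, label_batch])
    print(imgs.shape)  # (128, 24, 24, 3) on every iteration

coord.request_stop()
coord.join(threads)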