How does the tf.train.batch_join() function work in TensorFlow?

Posted: 2017-07-28 19:26:07

Tags: tensorflow deep-learning

I am trying to train a neural network in TensorFlow. I use the tf.train.batch_join() function to load the data and its labels. I do something like this:

image_batch, label_batch, image_batch_f = tf.train.batch_join(
    images_and_labels, batch_size=batch_size_placeholder,
    # shapes=[(args.image_size, args.image_size, 3), ()], enqueue_many=True,
    shapes=[(args.image_height, args.image_width, 3), (),
            (args.image_height, args.image_width, 3)],
    enqueue_many=True,
    capacity=4 * nrof_preprocess_threads * args.batch_size,
    allow_smaller_final_batch=True)
image_batch = tf.identity(image_batch, 'image_batch')
image_batch = tf.identity(image_batch, 'input')
label_batch = tf.identity(label_batch, 'label_batch')
image_batch_f = tf.identity(image_batch_f, 'flipped_images_batch')
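Conceptually, tf.train.batch_join() has several preprocessing threads feed one shared queue, and each dequeue op pulls a whole batch. This is not the TensorFlow implementation, just a minimal pure-Python sketch of that idea; the names `producer` and `dequeue_batch` are illustrative only:

```python
import queue
import threading

# Shared FIFO queue, playing the role of batch_join's internal queue.
shared_queue = queue.Queue(maxsize=32)

def producer(thread_id, n_items):
    # Each preprocessing thread enqueues (image, label) examples.
    for i in range(n_items):
        shared_queue.put((f"img_{thread_id}_{i}", f"label_{thread_id}_{i}"))

def dequeue_batch(batch_size):
    # One call returns one batch; the queue position advances by batch_size,
    # and image/label pairs stay aligned because they are dequeued together.
    items = [shared_queue.get() for _ in range(batch_size)]
    images, labels = zip(*items)
    return list(images), list(labels)

threads = [threading.Thread(target=producer, args=(t, 8)) for t in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

images, labels = dequeue_batch(4)
# Each call to dequeue_batch consumes fresh elements, mixing work from all threads.
```

The key property this sketch shows: a batch is consumed per dequeue, not per tensor, so the image, label, and flipped-image slots of one dequeue always belong together.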

Here I get three batches of data: a batch of images, a batch of labels, and a batch of flipped versions of those same images. I want to extract features for both the batch of images and the batch of flipped images. The lines below pass the batches through the network.

    # Build the inference graph
    prelogits, _ = network.inference(image_batch, args.keep_probability,
        phase_train=phase_train_placeholder, feature_dimension=args.embedding_size,
        weight_decay=args.weight_decay)


    features = tf.nn.l2_normalize(prelogits, 1, 1e-10, name='embeddings')

    # getting the flipped embeddings
    prelogits_f, _ = network.inference(image_batch_f, args.keep_probability,
        phase_train=phase_train_placeholder, feature_dimension=args.embedding_size,
        weight_decay=args.weight_decay, reuse=True)
    features_flipped_images = tf.nn.l2_normalize(prelogits_f, 1, 1e-10, name='embeddings_f')

To get both sets of features, I run session.run() on the features and features_flipped_images ops, like this:

feed_dict = {phase_train_placeholder: False, batch_size_placeholder: batch_size}
emb, emb_f = sess.run([features, features_flipped_images], feed_dict=feed_dict)

My question is the following. My guess is that when I run the session on features, that is when batch_join dispatches a batch of batch_size images. But when I do session.run() on features_flipped_images, that op also fetches a batch of flipped images from batch_join. Does batch_join dispatch a new batch of flipped images when features_flipped_images is executed? Or is it the same batch of flipped images that was produced when features was executed? If not, how can I achieve that? I want to extract features for a batch of images and the corresponding batch of flipped images.

1 Answer:

Answer 0 (score: 1)

My guess is that each run of [features, features_flipped_images] gets only one and the same batch of data. Let's take an example:

imgs_batch, labels_batch = tf.train.batch([img, label], ...)

Then, if you want to look at what is in the batch:

imgs_data, labels_data = sess.run([imgs_batch, labels_batch])

You see, when you run sess.run([features, features_flipped_images], ...), it is similar. I don't think you would get two batches; otherwise, imgs_data and labels_data would not correspond to each other.
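The answer's point can be made concrete with a small simulation (plain Python, not the TensorFlow API): every fetch in ONE sess.run() call shares a single dequeue, whereas two separate sess.run() calls dequeue two different batches. The `run`, `features`, and `features_flipped` names below are stand-ins for illustration:

```python
# Pretend queue of image batches, consumed in order.
batches = iter([[1, 2], [3, 4], [5, 6]])

def run(fetches):
    # Mimics sess.run(): dequeue exactly once per call, then compute every
    # fetched tensor from that same batch.
    batch = next(batches)
    return [f(batch) for f in fetches]

features = lambda b: [x * 10 for x in b]                # stands in for `features`
features_flipped = lambda b: [x * 10 for x in b[::-1]]  # `features_flipped_images`

# One run, two fetches: both are computed from the same batch [1, 2],
# so emb and emb_f correspond element-wise.
emb, emb_f = run([features, features_flipped])

# Two separate runs: each consumes its own batch ([3, 4], then [5, 6]),
# so these two results no longer correspond to each other.
emb2 = run([features])[0]
emb3 = run([features_flipped])[0]
```

This is why fetching features and features_flipped_images together in a single sess.run() call, as in the question's code, gives embeddings for the images and their flipped counterparts from the same batch, while two separate calls would consume two batches.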