TensorFlow batch_join: results come back out of order

Time: 2018-11-13 17:46:06

Tags: python tensorflow

I have a typical image-processing queue with only a few steps:

import numpy as np
import tensorflow as tf

NTHREADS = 4
images_and_labels = []
for _ in range(NTHREADS):
    # One dequeue/preprocess pipeline per thread; batch_join below
    # runs them in parallel and merges their outputs into batches.
    filenames = input_queue.dequeue()
    images = load_images(filenames, options)
    images_and_labels.append([images, filenames])

with tf.Session(config=config) as sess:
    # Define the batching op before starting the queue runners, so the
    # runner that batch_join registers is actually started.
    image_batch, label_batch = tf.train.batch_join(
        images_and_labels, batch_size=100, enqueue_many=True)
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
...
    # Feed the list of file paths into the input queue, then pull one batch.
    image_paths_array = np.expand_dims(np.array(file_list), 1)
    sess.run(enqueue_op, feed_dict={image_paths_placeholder: image_paths_array})
    labels, = sess.run([label_batch])
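
For completeness, here is roughly the setup around this snippet (a simplified sketch; the placeholder shape, the queue capacity, and the body of load_images are illustrative, not the exact code):

import numpy as np
import tensorflow as tf

# A queue of filename strings feeding the pipelines above.
image_paths_placeholder = tf.placeholder(tf.string, shape=(None, 1),
                                         name='image_paths')
input_queue = tf.FIFOQueue(capacity=100000, dtypes=[tf.string], shapes=[(1,)])
enqueue_op = input_queue.enqueue_many([image_paths_placeholder])

def load_images(filenames, options):
    # Illustrative preprocessing: read, decode, and resize one image.
    contents = tf.read_file(filenames[0])
    image = tf.image.decode_png(contents, channels=3)
    image = tf.image.resize_images(image, (160, 160))
    # batch_join(..., enqueue_many=True) expects each element to carry a
    # leading batch dimension, hence the expand_dims.
    return tf.expand_dims(image, 0)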

Now I look at the labels and see that they are slightly out of order. Every time I run my little program I get a noticeably different order. That's fine; I understand that's just multithreading. I only want to measure this "out-of-orderness":

max_outoforder = 0  # largest |sorted_index - position| seen so far
n = 0
for i in labels.argsort():
    max_outoforder = max(max_outoforder, abs(i - n))
    n += 1
print(len(threads), max_outoforder)
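
To make the metric concrete, here is what it returns on a few made-up label arrays:

import numpy as np

def max_displacement(labels):
    # Same measurement as above: largest |sorted_index - position|.
    out = 0
    for n, i in enumerate(np.asarray(labels).argsort()):
        out = max(out, abs(int(i) - n))
    return out

print(max_displacement(['a', 'b', 'c', 'd']))   # 0: already in order
print(max_displacement(['b', 'a', 'd', 'c']))   # 1: adjacent swaps only
print(max_displacement(['c', 'a', 'b', 'd']))   # 2: 'c' arrived two slots early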

Naturally, I see a strong correlation between the two printed numbers. That is:

len(threads)    max_outoforder
     2                0
     3                3
     5                5
     9                9

Is this a coincidence, or is it a hard rule that this reordering can never exceed the number of queue runners?
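
To build some intuition I also tried a toy model in plain Python (no TensorFlow): worker threads race to move sequential items from an ordered queue to a shared output list, and I apply the same displacement metric to the result:

import queue
import threading

def simulate(num_workers, num_items=1000):
    # Toy model, not TensorFlow: each worker repeatedly takes the next
    # item from the input queue and appends it to the shared output.
    inq = queue.Queue()
    for i in range(num_items):
        inq.put(i)
    out = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                item = inq.get_nowait()
            except queue.Empty:
                return
            with lock:
                out.append(item)

    workers = [threading.Thread(target=worker) for _ in range(num_workers)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    # Same metric as above: largest |item - position| in the output.
    return max(abs(item - pos) for pos, item in enumerate(out))

for n in (1, 2, 4, 8):
    print(n, simulate(n))

The displacements it prints tend to be small and to grow with the worker count, but nothing in this model enforces a hard cap, which is exactly why I'd like to know whether batch_join gives any actual guarantee.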

0 answers