TensorFlow transfer learning: how to compute bottlenecks in batches instead of one image at a time

Date: 2018-07-09 20:21:52

Tags: python tensorflow machine-learning computer-vision transfer-learning

In the TensorFlow transfer-learning retrain.py example, the bottleneck values are computed for each image one at a time:

image_data = tf.gfile.FastGFile(image_path, 'rb').read()
...
bottleneck_values = run_bottleneck_on_image(sess, image_data,
                                            jpeg_data_tensor, decoded_image_tensor,
                                            resized_input_tensor, bottleneck_tensor)

Inside run_bottleneck_on_image, the following is done for each image_data:

# First decode the JPEG image, resize it, and rescale the pixel values.
resized_input_values = sess.run(decoded_image_tensor,
                              {image_data_tensor: image_data})
# Then run it through the recognition network.
bottleneck_values = sess.run(bottleneck_tensor,
                           {resized_input_tensor: resized_input_values})
bottleneck_values = np.squeeze(bottleneck_values)
return bottleneck_values

Is there a way to get the bottleneck values for a BATCH of images at once, instead of running them one by one, which is slow and inefficient on a GPU?
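For context, here is a minimal sketch of the kind of batched feed the question has in mind. It assumes that resized_input_tensor was defined with a batch dimension of None (so it accepts more than one image per forward pass) and that the decoded/resized tensor from retrain.py's JPEG decoding has a leading batch dimension of 1; the function name run_bottleneck_on_batch is a hypothetical stand-in, not part of retrain.py.

import numpy as np
import tensorflow as tf

def run_bottleneck_on_batch(sess, image_paths, image_data_tensor,
                            decoded_image_tensor, resized_input_tensor,
                            bottleneck_tensor):
    # JPEG decoding still runs per image, since the decode op only handles
    # a single encoded image at a time.
    resized_batch = []
    for image_path in image_paths:
        image_data = tf.gfile.FastGFile(image_path, 'rb').read()
        resized = sess.run(decoded_image_tensor,
                           {image_data_tensor: image_data})
        # Assumes `resized` has shape [1, H, W, 3]; drop the leading dim.
        resized_batch.append(np.squeeze(resized, axis=0))
    resized_batch = np.stack(resized_batch)  # shape [batch, H, W, 3]

    # One forward pass through the recognition network for the whole batch;
    # bottleneck_values should then have shape [batch, bottleneck_size].
    bottleneck_values = sess.run(bottleneck_tensor,
                                 {resized_input_tensor: resized_batch})
    return bottleneck_values

In this sketch only the cheap decode/resize step remains per image; the expensive bottleneck computation is done in a single sess.run over the stacked batch, which is the part the GPU can actually parallelize.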

0 Answers:

There are no answers yet.