TensorFlow Dataset API takes more than 10 minutes to evaluate output shapes

Date: 2017-12-13 15:19:09

Tags: python tensorflow tensorflow-datasets

I'm working with Python 3.5, the low-shot Microsoft Celeb1M dataset, and TensorFlow 1.4, and I want to use the new Dataset API for an image classification task.

I need to build a dataset with the following format (an episode): it contains N*k + 1 images, where N is the number of distinct classes and k is the number of samples per class. The goal is to classify the last image into the correct one of the N classes, each class being represented by k samples. For example, with N = 20 and k = 1, an episode holds 21 images: one sample from each of 20 classes, plus the image to classify.

For this, I have 16,000 TFRecord files on a hard drive, each around 20 MB. Each TFRecord contains the images of one class, roughly 50-100 images.
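(For context, here is a hedged sketch of how one such record could have been written; the actual writer is not shown in the post, but the feature keys match what the `read_and_decode` function below expects:)

```python
import tensorflow as tf

# Hypothetical writer matching the feature spec parsed by read_and_decode():
# one TFRecord file per class, each example holding a raw uint8 image + metadata.
def _int64(v): return tf.train.Feature(int64_list=tf.train.Int64List(value=[v]))
def _bytes(v): return tf.train.Feature(bytes_list=tf.train.BytesList(value=[v]))

def write_class_file(path, images, label):
    """images: list of (height, width, channels) uint8 numpy arrays."""
    with tf.python_io.TFRecordWriter(path) as writer:
        for img in images:
            h, w, c = img.shape
            example = tf.train.Example(features=tf.train.Features(feature={
                'image': _bytes(img.tobytes()),
                'label': _int64(label),
                'height': _int64(h),
                'width': _int64(w),
                'channels': _int64(c),
            }))
            writer.write(example.SerializeToString())
```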

I want to randomly pick N files, then randomly pick k images from each, shuffle them, and finally pick one last image to classify among the N classes, distinct from the samples. To do that, I mixed "native" Python code with TensorFlow Dataset API methods.

The problem is that the solution I wrote takes far too long to complete. Here is the working code I use to create such a dataset. For this example, I take only 20 files from the hard drive.

```python
import tensorflow as tf
import os
import time
import numpy.random as rng

# Creating a few variables
data_dir = '/fastdata/Celeb1M/'
test_data = [data_dir + 'test/' + elt for elt in os.listdir(data_dir + '/test/')]

# Function to decode TFRecords
def read_and_decode(example_proto):
    features = tf.parse_single_example(
        example_proto,
        features={
            'image': tf.FixedLenFeature([], tf.string),
            'label': tf.FixedLenFeature([], tf.int64),
            'height': tf.FixedLenFeature([], tf.int64),
            'width': tf.FixedLenFeature([], tf.int64),
            'channels': tf.FixedLenFeature([], tf.int64)
        })
    image = tf.decode_raw(features['image'], tf.uint8)
    image = tf.cast(image, tf.float32) * (1. / 255)
    height = tf.cast(features['height'], tf.int32)
    width = tf.cast(features['width'], tf.int32)
    channels = tf.cast(features['channels'], tf.int32)
    image = tf.reshape(image, [height, width, channels])
    label = tf.cast(features['label'], tf.int32)
    return image, label

def get_episode(classes_per_set, samples_per_class, list_files):
    """
    :param classes_per_set : N-way classification
    :param samples_per_class : k-shot classification
    :param list_files : list of length classes_per_set of files containing examples
    :return : an episode containing classes_per_set * samples_per_class images,
              plus 1 image to classify among the N*k others
    """
    assert classes_per_set == len(list_files)

    # The extra image to classify, drawn before the file list is shuffled
    dataset = tf.data.TFRecordDataset(list_files[-1]).map(read_and_decode) \
        .shuffle(100)
    elt_to_classify = dataset.take(1)

    rng.shuffle(list_files)
    episode = tf.data.TFRecordDataset([list_files[-1]]) \
        .map(read_and_decode) \
        .shuffle(100) \
        .take(1)
    _ = list_files.pop()

    # One TFRecordDataset per class file, chained together with concatenate()
    for class_file in list_files:
        element = tf.data.TFRecordDataset([class_file]) \
            .map(read_and_decode) \
            .shuffle(150) \
            .take(1)
        episode = episode.concatenate(element)

    episode = episode.concatenate(elt_to_classify)
    return episode

# Testing the code
episode = get_episode(20, 1, test_data)

start = time.time()
iterator = episode.make_one_shot_iterator()
end = time.time()

print("time elapsed: ", end - start)

"""
Result :
starting to build one_shot_iterator
time elapsed:  188.75095319747925
"""
```

The step that takes too long is the iterator initialization. In my full code, with batching of episodes, it takes around 15 minutes. I noticed that the problem most likely comes from the evaluation of `episode.output_shapes`: just doing `print(episode.output_shapes)` at the end also takes a long time (though less than initializing the iterator).
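A minimal way to isolate this (a sketch of my own, reusing `get_episode` and `test_data` from the code above) is to time the `output_shapes` property by itself, with no iterator and no session:

```python
# Sketch: timing only the static shape computation (no iterator, no session).
episode = get_episode(20, 1, test_data)

start = time.time()
shapes = episode.output_shapes   # walks every dataset in the concatenate chain
print("output_shapes took {:.1f}s".format(time.time() - start))
print(shapes)
```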

Moreover, I'm working in Docker, and while the iterator is initializing I can see the CPU at 100% for the whole step.

I wonder if the cause is the mix of native Python code and TensorFlow operations, which could create a CPU bottleneck.

My understanding was that using the Dataset API only creates operation nodes on the TensorFlow graph, and that the dataset is evaluated only when executing `tf.Session().run()`.
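That assumption can be probed with a hypothetical micro-benchmark (my sketch, not from the original post): a long chain of `concatenate()` calls is slow to inspect even though no session is ever run. The TF 1.4 source visible in the traceback below suggests that each `ConcatenateDataset` evaluates its input's `output_shapes` twice, so the cost can grow very fast with the chain length:

```python
import time
import tensorflow as tf

# Hypothetical micro-benchmark: chain n concatenate() calls, then touch the
# static output_shapes property once. No tf.Session() is ever created.
def build_chain(n):
    ds = tf.data.Dataset.from_tensors(0)
    for _ in range(n):
        ds = ds.concatenate(tf.data.Dataset.from_tensors(0))
    return ds

for n in (10, 15, 20):
    ds = build_chain(n)
    start = time.time()
    _ = ds.output_shapes          # recurses through the whole chain
    print("links: {:2d}  output_shapes: {:.2f}s".format(n, time.time() - start))
```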

For more details, I tried:

```python
episode = dataset.get_episode(50, 1, test_data[:50])
iterator = episode.make_one_shot_iterator()
```

After 3 hours, it still hadn't finished. I stopped the code, and here is the traceback (I edited out some repeated blocks, such as the recurring `return self._as_variant_tensor()` frames):

```python
KeyboardInterrupt                         Traceback (most recent call last)
<ipython-input-8-550523c179b3> in <module>()
      2 print("there")
      3 start = time.time()
----> 4 iterator = episode.make_one_shot_iterator()
      5 end = time.time()
      6 print("time elapsed: ", end - start)

~/miniconda2/envs/dljupyter/lib/python3.5/site-packages/tensorflow/python/data/ops/dataset_ops.py in make_one_shot_iterator(self)
    110       return self._as_variant_tensor()  # pylint: disable=protected-access
    111
--> 112     _make_dataset.add_to_graph(ops.get_default_graph())
    113
    114     return iterator_ops.Iterator(

~/miniconda2/envs/dljupyter/lib/python3.5/site-packages/tensorflow/python/framework/function.py in add_to_graph(self, g)
    484   def add_to_graph(self, g):
    485     """Adds this function into the graph g."""
--> 486     self._create_definition_if_needed()
    487
    488     # Adds this function into 'g'.

~/miniconda2/envs/dljupyter/lib/python3.5/site-packages/tensorflow/python/framework/function.py in _create_definition_if_needed(self)
    319     """Creates the function definition if it's not created yet."""
    320     with context.graph_mode():
--> 321       self._create_definition_if_needed_impl()
    322
    323   def _create_definition_if_needed_impl(self):

~/miniconda2/envs/dljupyter/lib/python3.5/site-packages/tensorflow/python/framework/function.py in _create_definition_if_needed_impl(self)
    336       # Call func and gather the output tensors.
    337       with vs.variable_scope("", custom_getter=temp_graph.getvar):
--> 338         outputs = self._func(*inputs)
    339
    340     # There is no way of distinguishing between a function not returning

~/miniconda2/envs/dljupyter/lib/python3.5/site-packages/tensorflow/python/data/ops/dataset_ops.py in _make_dataset()
    108     @function.Defun(capture_by_value=True)
    109     def _make_dataset():
--> 110       return self._as_variant_tensor()  # pylint: disable=protected-access
    111
    112     _make_dataset.add_to_graph(ops.get_default_graph())

~/miniconda2/envs/dljupyter/lib/python3.5/site-packages/tensorflow/python/data/ops/dataset_ops.py in _as_variant_tensor(self)
    998     # pylint: disable=protected-access
    999     return gen_dataset_ops.concatenate_dataset(
-> 1000         self._input_dataset._as_variant_tensor(),
   1001         self._dataset_to_concatenate._as_variant_tensor(),
   1002         output_shapes=nest.flatten(self.output_shapes),

~/miniconda2/envs/dljupyter/lib/python3.5/site-packages/tensorflow/python/data/ops/dataset_ops.py in output_shapes(self)
   1006   @property
   1007   def output_shapes(self):
-> 1008     return nest.pack_sequence_as(self._input_dataset.output_shapes, [
   1009         ts1.most_specific_compatible_shape(ts2)
   1010         for (ts1, ts2) in zip(

~/miniconda2/envs/dljupyter/lib/python3.5/site-packages/tensorflow/python/data/ops/dataset_ops.py in output_shapes(self)
   1009         ts1.most_specific_compatible_shape(ts2)
   1010         for (ts1, ts2) in zip(
-> 1011             nest.flatten(self._input_dataset.output_shapes),
   1012             nest.flatten(self._dataset_to_concatenate.output_shapes))
   1013     ])

~/miniconda2/envs/dljupyter/lib/python3.5/site-packages/tensorflow/python/data/ops/dataset_ops.py in output_shapes(self)
   1009         ts1.most_specific_compatible_shape(ts2)
   1010         for (ts1, ts2) in zip(
-> 1011             nest.flatten(self._input_dataset.output_shapes),
   1012             nest.flatten(self._dataset_to_concatenate.output_shapes))
   1013     ])

~/miniconda2/envs/dljupyter/lib/python3.5/site-packages/tensorflow/python/data/util/nest.py in pack_sequence_as(structure, flat_sequence)
    239     return flat_sequence[0]
    240
--> 241   flat_structure = flatten(structure)
    242   if len(flat_structure) != len(flat_sequence):
    243     raise ValueError(

~/miniconda2/envs/dljupyter/lib/python3.5/site-packages/tensorflow/python/data/util/nest.py in flatten(nest)
    133     A Python list, the flattened version of the input.
    134   """
--> 135   return list(_yield_flat_nest(nest)) if is_sequence(nest) else [nest]
    136
    137

~/miniconda2/envs/dljupyter/lib/python3.5/site-packages/tensorflow/python/data/util/nest.py in is_sequence(seq)
    118   """
    119   return (isinstance(seq, (_collections.Sequence, dict))
--> 120           and not isinstance(seq, (list, _six.string_types)))
    121
    122

KeyboardInterrupt:
```

So I would like to know why initializing the iterator takes so long: I couldn't find much information about how the initialization works, or about what exactly gets evaluated when the graph is created.

I haven't managed to achieve what I want with pure `tf.data.Dataset` methods, but I haven't yet tried the approach used in this thread.

1 Answer:

Answer 0 (score: 1)

The code is expensive because it loops over the 16,000 files in Python, creating O(16000) nodes in the graph. However, you can avoid this by using `Dataset.flat_map()` to move the loop into the graph:
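A sketch of what that could look like, reusing `read_and_decode` from the question (the episode details here, such as shuffling the file list and drawing the extra image from the last class, mirror the question's code but are an assumption, not the answerer's verbatim snippet):

```python
import tensorflow as tf
import numpy.random as rng

def get_episode(classes_per_set, samples_per_class, list_files):
    # Sketch (assumed details): a single dataset of filenames plus one
    # flat_map node replaces the O(16000)-node chain of concatenate() calls.
    assert classes_per_set == len(list_files)
    rng.shuffle(list_files)

    # The extra image to classify, drawn from one class as in the question.
    elt_to_classify = tf.data.TFRecordDataset([list_files[-1]]) \
        .map(read_and_decode) \
        .shuffle(100) \
        .take(1)

    # A dataset *of filenames*: the per-class loop now lives in the graph.
    filenames = tf.data.Dataset.from_tensor_slices(tf.constant(list_files))
    episode = filenames.flat_map(
        lambda f: tf.data.TFRecordDataset(f)
                        .map(read_and_decode)
                        .shuffle(150)
                        .take(samples_per_class))

    return episode.concatenate(elt_to_classify)
```

Built this way, the graph holds a fixed handful of dataset nodes no matter how many files an episode draws from, so evaluating `output_shapes` and calling `make_one_shot_iterator()` no longer traverse a 16,000-link concatenate chain.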