Why am I getting a shape error when trying to pass a batch from the TensorFlow Dataset API into a session op?

Asked: 2018-05-09 20:07:20

Tags: tensorflow tensorflow-datasets

I am working on a conversion to the Dataset API, and I don't think I have enough experience with the API yet to know how to handle the situation below. We currently use queuing and batching for image augmentation. My task is to evaluate the new Dataset API and use it in place of the queues in our existing implementation.

What we want to do is get a reference to all of the paths and drive all of the processing from that reference. As you can see in the dataset setup, I map parse_fn onto the dataset itself, then go on to read the file and extract the initial values from the filename. However, when I then call the iterator's next_batch method and pass those values to get_summary, I now get an error around shapes. I've been trying things that keep changing the error, so I figured I should check whether anyone on SO can see that I'm thinking about this all wrong and should go a different route. Is there anything fundamentally wrong with how I'm using the Dataset API here?

Should I not be calling it this way at all? I've noticed that most of the examples I see get the batch, pass the tensors into the op, capture the result in a variable, and pass that to sess.run, but I haven't found a simple way to do that without errors in our setup, so this is the approach I took (and it is still erroring). I'll keep trying to track the problem down and will post whatever I find here, but if anyone sees something, please let me know. Thanks!
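What I mean by that pattern, as a simplified hypothetical sketch (parse_single_example and build_model are placeholders, not our actual functions):

    import tensorflow as tf

    # Hypothetical sketch of the pattern from the examples (not our code):
    # the iterator's output tensors are wired straight into the graph, so there
    # is no feed_dict and no shape mismatch between a fetched batch and a placeholder.
    dataset = tf.data.Dataset.from_tensor_slices(filenames)   # filenames: list of paths
    dataset = dataset.map(parse_single_example)                # hypothetical parse fn -> (image, label)
    dataset = dataset.repeat().batch(32)
    iterator = dataset.make_initializable_iterator()
    images, labels = iterator.get_next()

    logits = build_model(images)                               # hypothetical model-building fn
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        sess.run(iterator.initializer)
        loss_val = sess.run(loss)   # pulling the batch happens inside the same run call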

Current error:

    ... in get_summary
    summary, acc = sess.run([self._summary_op, self._accuracy], feed_dict=feed_dict)
    ValueError: Cannot feed value of shape (32,) for Tensor 'ph_input_labels:0', which has shape '(?, 1)'

Below is the block that calls the get_summary method and triggers the error:

def perform_train():
    if __name__ == '__main__':
        #Get all our image paths
        filenames = data_layer_train.get_image_paths()
        next_batch, iterator = preproc_image_fn(filenames=filenames)

    with tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) as sess:
        with sess.graph.as_default():
            # Set the random seed for tensorflow
            tf.set_random_seed(cfg.RNG_SEED)

            classifier_network = c_common.create_model(len(products_to_class_dict), is_training=True)
            optimizer, global_step_var = c_common.create_optimizer(classifier_network)

            sess.run(tf.local_variables_initializer())
            sess.run(tf.global_variables_initializer())

            # Init tables and dataset iterator
            sess.run(tf.tables_initializer())
            sess.run(iterator.initializer)

            cur_epoch = 0
            blobs = None
            try:
                epoch_size = data_layer_train.get_steps_per_epoch()
                num_steps = num_epochs * epoch_size
                for step in range(num_steps):
                    timer_summary.tic()
                    if blobs is None:
                        #Now populate from our training dataset
                        blobs = sess.run(next_batch)

                    # *************** Below is where it is erroring *****************
                    summary_train, acc = classifier_network.get_summary(sess, blobs["images"], blobs["labels"], blobs["weights"])

            ...

I believe the error is happening in preproc_image_fn:

def preproc_image_fn(filenames, images=None, labels=None, image_paths=None, cells=None, weights=None):
    def _parse_fn(filename, label, weight):
        augment_instance = False
        paths=[]
        selected_cells=[]
        if vals.FIRST_ITER:
            #Perform our check of the path to see if _data_augmentation is within it
            #If so set augment_instance to true and replace the substring with an empty string
            new_filename = tf.regex_replace(filename, "_data_augmentation", "")
            contains = tf.equal(tf.size(tf.string_split([filename], "")), tf.size(tf.string_split([new_filename])))
            filename = new_filename
            if contains is True:
                augment_instance = True

        core_file = tf.string_split([filename], '\\').values[-1]
        product_id = tf.string_split([core_file], ".").values[0]

        label = search_tf_table_for_entry(product_id)
        weight = data_layer_train.get_weights(product_id)

        image_string = tf.read_file(filename)
        img = tf.image.decode_image(image_string, channels=data_layer_train._channels)
        img.set_shape([None, None, None])
        img = tf.image.resize_images(img, [data_layer_train._target_height, data_layer_train._target_width])
        #Previously I was returning the below, but I was getting an error from the op when assigning feed_dict stating that it didnt like the dictionary
        #retval = dict(zip([filename], [img])), label, weight
        retval = img, label, weight
        return retval

    num_files = len(filenames)
    filenames = tf.constant(filenames)

    #*********** Setup dataset below ************
    dataset = tf.data.Dataset.from_tensor_slices((filenames, labels, weights))
    dataset=dataset.map(_parse_fn)
    dataset = dataset.repeat()
    dataset = dataset.batch(32)
    iterator = dataset.make_initializable_iterator()

    batch_features,  batch_labels, batch_weights = iterator.get_next()
    return {'images': batch_features, 'labels': batch_labels, 'weights': batch_weights}, iterator

def search_tf_table_for_entry(self, product_id):
    '''Looks up keys in the table and outputs the values. Will return -1 if not found '''
    if product_id is not None:
        return self._products_to_class_table.lookup(product_id)
    else:
        if not self._real_eval:
            logger().info("class not found in training {} ".format(product_id))
        return -1

I create the model and use the placeholders I was using previously:

...
 def create_model(self):
    weights_regularizer = tf.contrib.layers.l2_regularizer(cfg.TRAIN.WEIGHT_DECAY)
    biases_regularizer = weights_regularizer

    # Input data.
    self._input_images = tf.placeholder(
        tf.float32, shape=(None, self._image_height, self._image_width, self._num_channels), name="ph_input_images")
    self._input_labels = tf.placeholder(tf.int64, shape=(None, 1), name="ph_input_labels")
    self._input_weights = tf.placeholder(tf.float32, shape=(None, 1), name="ph_input_weights")
    self._is_training = tf.placeholder(tf.bool, name='ph_is_training')
    self._keep_prob = tf.placeholder(tf.float32, name="ph_keep_prob")
    self._accuracy = tf.reduce_mean(tf.cast(self._correct_prediction, tf.float32))
    ...
    self.create_summaries()

def create_summaries(self):
    val_summaries = []
    with tf.device("/cpu:0"):
        for var in self._act_summaries:
            self._add_act_summary(var)
        for var in self._train_summaries:
            self._add_train_summary(var)

    self._summary_op = tf.summary.merge_all()
    self._summary_op_val = tf.summary.merge(val_summaries)

def get_summary(self, sess, images, labels, weights):
    feed_dict = {self._input_images: images, self._input_labels: labels,
                 self._input_weights: weights, self._is_training: False}

    summary, acc = sess.run([self._summary_op, self._accuracy], feed_dict=feed_dict)

    return summary, acc

1 Answer:

Answer 0 (score: 1)

As the error states:

    Cannot feed value of shape (32,) for Tensor 'ph_input_labels:0', which has shape '(?, 1)'

My guess is that the labels you pass into get_summary have shape [32]. Could you reshape them to (32, 1)? Or perhaps reshape the label inside _parse_fn?
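For example, a minimal untested sketch, reusing the names from your snippets, that adds the trailing dimension inside _parse_fn so that dataset.batch(32) produces (32, 1) tensors matching the placeholders:

    def _parse_fn(filename, label, weight):
        # ... existing filename parsing, table lookup, and image decoding ...

        # Scalar lookups come out with shape [] and batch to (32,).
        # Reshaping them to [1] here makes the batched tensors (32, 1),
        # which matches ph_input_labels / ph_input_weights with shape (?, 1).
        label = tf.reshape(label, [1])
        weight = tf.reshape(weight, [1])
        return img, label, weight

Alternatively, you could leave the pipeline alone and reshape on the Python side just before building the feed_dict, e.g. np.reshape(blobs["labels"], (-1, 1)) and likewise for the weights.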