TensorFlow model evaluation based on batch size

Asked: 2016-07-06 18:17:28

Tags: python machine-learning tensorflow

I have a graph in TensorFlow that I trained for several hundred epochs with a batch size of 32 observations. I now want to predict on some new data with the trained graph, so I save and reload it, but I am forced to always pass in exactly as many observations as my batch size, because I declared a placeholder in the graph whose shape is tied to the batch size. How can I make the graph accept any number of observations?

How should I set this up so that I can train on one number of observations and then later run the graph on a different number?

Here are excerpts of the important parts of the code. Building the graph:

graph = tf.Graph()
with graph.as_default():
    # weight_variable, bias_variable, conv2d and max_pool_2x2 are helpers
    # defined elsewhere in the class. The fixed batch_size in these two
    # shapes is what ties the graph to one batch size.
    x = tf.placeholder(tf.float32, shape=[batch_size, self.image_height, self.image_width, 1], name="data")

    y_ = tf.placeholder(tf.float32, shape=[batch_size, num_labels], name="labels")

    # Layer 1
    W_conv1 = weight_variable([patch_size, patch_size, 1, depth], name="weight_1")
    b_conv1 = bias_variable([depth], name="bias_1")

    h_conv1 = tf.nn.relu(conv2d(x, W_conv1, name="conv_1") + b_conv1, name="relu_1")
    h_pool1 = max_pool_2x2(h_conv1, name="pool_1")

    # Layer 2
    #W_conv2 = weight_variable([patch_size, patch_size, depth, depth*2])
    #b_conv2 = bias_variable([depth*2])

    #h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
    #h_pool2 = max_pool_2x2(h_conv2)

    # Densely connected layer. One 2x2 max pool halves each spatial
    # dimension, so the flattened size below must match W_fc1's first dim.
    flat_size = (self.image_height // 2) * (self.image_width // 2) * depth
    W_fc1 = weight_variable([flat_size, depth], name="weight_2")
    b_fc1 = bias_variable([depth], name="bias_2")

    h_pool2_flat = tf.reshape(h_pool1, [-1, flat_size], name="reshape_1")
    h_fc1 = tf.nn.relu(tf.nn.xw_plus_b(h_pool2_flat, W_fc1, b_fc1), name="relu_2")

    keep_prob = tf.placeholder(tf.float32, name="keep_prob")
    h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob, name="drop_1")

    W_fc2 = weight_variable([depth, num_labels], name="dense_weight")
    b_fc2 = bias_variable([num_labels], name="dense_bias")

    logits = tf.nn.xw_plus_b(h_fc1_drop, W_fc2, b_fc2)
    tf.add_to_collection("logits", logits)
    y_conv = tf.nn.softmax(logits, name="softmax_1")
    tf.add_to_collection("y_conv", y_conv)

    with tf.name_scope("cross-entropy") as scope:
        # softmax_cross_entropy_with_logits expects the raw logits,
        # not the softmax output.
        cross_entropy = tf.reduce_mean(
            tf.nn.softmax_cross_entropy_with_logits(logits, y_, name="cross_entropy_1"))
        ce_summ = tf.scalar_summary("cross entropy", cross_entropy, name="cross_entropy")

    optimizer = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy, name="min_adam_1")

    with tf.name_scope("prediction") as scope:
        correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
        accuracy_summary = tf.scalar_summary("accuracy", accuracy, name="accuracy_summary")

    merged = tf.merge_all_summaries()
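
A minimal reproduction of the resulting problem (this sketch is mine, not from the original post): a placeholder declared with a fixed batch dimension rejects feeds of any other size.

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[32, 28, 28, 1], name="data")
y = tf.identity(x)

with tf.Session() as sess:
    # Works: the feed matches the declared batch size of 32.
    sess.run(y, feed_dict={x: np.zeros((32, 28, 28, 1))})
    # Fails with ValueError: Cannot feed value of shape (1, 28, 28, 1)
    # for Tensor 'data:0', which has shape '(32, 28, 28, 1)'.
    sess.run(y, feed_dict={x: np.zeros((1, 28, 28, 1))})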

Loading the model and running it on new data:

with tf.Session() as sess:
    new_saver = tf.train.import_meta_graph('./simple_model/one-layer-50.meta')
    new_saver.restore(sess, './simple_model/one-layer-50')
    logger.info("Model restored")
    image, _ = tf_nn.reformat(images, None, 3)

    # These are brand-new placeholders, separate from the ones saved in the
    # graph, so feeding them does not actually reach the restored model.
    x_image = tf.placeholder(tf.float32, shape=[image.shape[0], 28, 28, 1],
                             name="data")
    keep_prob = tf.placeholder(tf.float32, name="keep_prob")

    # keep_prob would normally be 1.0 at prediction time (no dropout).
    feed_dict = {x_image: image, keep_prob: .01}
    # The graph-building code stored the op under the key "y_conv", not "y_".
    y_ = tf.get_collection("y_conv")
    prediction = sess.run(y_, feed_dict=feed_dict)
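
For comparison, a sketch of looking up the restored graph's own tensors instead of creating new placeholders (assuming the tensor names and collection key from the graph-building code above):

# Inside the same `with tf.Session() as sess:` block, after restore.
graph = tf.get_default_graph()
x = graph.get_tensor_by_name("data:0")               # restored input placeholder
keep_prob = graph.get_tensor_by_name("keep_prob:0")  # restored dropout placeholder
y_conv = tf.get_collection("y_conv")[0]              # stored under this key above
prediction = sess.run(y_conv, feed_dict={x: image, keep_prob: 1.0})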

2 Answers:

Answer 0 (score: 5)

You can give a placeholder a flexible size along one of its dimensions by using None instead of a specific number, like this:

x = tf.placeholder(tf.float32, shape=[None, self.image_height, self.image_width, 1], name="data")

y_ = tf.placeholder(tf.float32, shape=[None, num_labels], name="labels")

Edit: There is a section in the TensorFlow FAQ about this.
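
A minimal sketch of what this buys (TF 1.x-style API, illustrative shapes): the same graph can then be run with different batch sizes at train and predict time.

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 4], name="data")
mean_per_row = tf.reduce_mean(x, 1)  # ops built on x inherit the flexible batch dim

with tf.Session() as sess:
    print(sess.run(mean_per_row, feed_dict={x: np.zeros((32, 4))}).shape)  # (32,)
    print(sess.run(mean_per_row, feed_dict={x: np.zeros((1, 4))}).shape)   # (1,)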

Answer 1 (score: 2)

My approach was to define batch_size as a tf.Variable and then provide the value of the batch size to use when running the session. This has worked well for me in the past, but I imagine Stryke's solution would be more elegant.
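
A sketch of that pattern (my reading of the answer; a scalar placeholder is the usual way to feed a run-time value, and all names here are illustrative): the fed batch size is used wherever the graph needs a concrete batch dimension, e.g. a dynamic reshape.

import numpy as np
import tensorflow as tf

batch_size = tf.placeholder(tf.int32, name="batch_size")
x = tf.placeholder(tf.float32, shape=[None, 784], name="data")
# tf.stack (tf.pack in 0.x releases) builds the shape tensor
# from the run-time batch size.
x_image = tf.reshape(x, tf.stack([batch_size, 28, 28, 1]))

with tf.Session() as sess:
    out = sess.run(x_image, feed_dict={x: np.zeros((7, 784)), batch_size: 7})
    print(out.shape)  # (7, 28, 28, 1)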