TensorFlow: feeding a placeholder fails when training inside a loop

Date: 2017-01-20 15:43:02

Tags: python tensorflow

I'm trying to repeatedly train a neural network with hidden layers of different sizes, to work out how many neurons it should have. I wrote a network that works fine for a single pass. The code is:

import tensorflow as tf
import nn

def train(layers, data, folder = 'run1'):
    input_layer_size, hidden_layer_size, num_labels = layers;
    X, y, X_val, y_val = data;

    X_placeholder = tf.placeholder(tf.float32, shape=(None, input_layer_size), name='X')
    y_placeholder = tf.placeholder(tf.uint8, shape=(None, num_labels), name='y')
    Theta1 = tf.Variable(nn.randInitializeWeights(input_layer_size, hidden_layer_size), name='Theta1')
    bias1 = tf.Variable(nn.randInitializeWeights(hidden_layer_size, 1), name='bias1')
    Theta2 = tf.Variable(nn.randInitializeWeights(hidden_layer_size, num_labels), name='Theta2')
    bias2 = tf.Variable(nn.randInitializeWeights(num_labels, 1), name='bias2')
    cost = nn.cost(X_placeholder, y_placeholder, Theta1, bias1, Theta2, bias2)
    optimize = tf.train.GradientDescentOptimizer(0.6).minimize(cost)

    accuracy, precision, recall, f1 = nn.evaluate(X_placeholder, y_placeholder, Theta1, bias1, Theta2, bias2)

    cost_summary = tf.summary.scalar('cost', cost);
    accuracy_summary = tf.summary.scalar('accuracy', accuracy);
    precision_summary = tf.summary.scalar('precision', precision);
    recall_summary = tf.summary.scalar('recall', recall);
    f1_summary = tf.summary.scalar('f1', f1);
    summaries = tf.summary.merge_all();

    sess = tf.Session();
    saver = tf.train.Saver()
    init = tf.global_variables_initializer()
    sess.run(init)

    writer = tf.summary.FileWriter('./tmp/logs/' + folder, sess.graph)

    NUM_STEPS = 20;

    for step in range(NUM_STEPS):
        sess.run(optimize, feed_dict={X_placeholder: X, y_placeholder: y});
        if (step > 0) and ((step + 1) % 10 == 0):
            summary = sess.run(summaries, feed_dict={X_placeholder: X_val, y_placeholder: y_val});
            # writer.add_summary(summary, step);
            print('Step', step + 1, 'of', NUM_STEPS);

    save_path = saver.save(sess, './tmp/model_' + folder + '.ckpt')
    print("Model saved in file: %s" % save_path)
    sess.close();

When I put this call inside a loop, I only get through the first iteration. It seems to fail on the second iteration, the first time it hits this line:

summary = sess.run(summaries, feed_dict={X_placeholder: X_val, y_placeholder: y_val});

I get the error: InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'X' with dtype float

I logged X and X_val right before feeding them, and they look just like they did on the previous run. If I comment out that second sess.run it works fine, but I kind of need my summaries...

My outer loop looks like this:

import train
import loadData

input_layer_size  = 5513;
num_labels = 128;

data = loadData.load(input_layer_size, num_labels);

for hidden_layer_size in range(50, 500, 50):
    train.train([input_layer_size, hidden_layer_size, num_labels], data, 'run' + str(hidden_layer_size))

1 Answer:

Answer 0 (score: 0)

Because you call the train function inside a loop, each call creates a new copy of the placeholders in the same default graph. The first call works because there is only one copy. On the second call you have duplicate placeholders, and tf.summary.merge_all() also picks up the summary ops left over from the first call, which depend on the old placeholder that is never fed. The solution is to separate the code that builds the model from the code that runs the training.
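
One way to implement that separation (a sketch based on the question's code, not the answerer's exact solution; the nn helpers are the asker's own module and are assumed to work as in the question) is to build each model in its own tf.Graph, so that tf.summary.merge_all() and tf.train.Saver() only see the ops created for the current hidden-layer size:

import tensorflow as tf
import nn

def train(layers, data, folder='run1'):
    input_layer_size, hidden_layer_size, num_labels = layers
    X, y, X_val, y_val = data

    # Build the whole model in a graph that belongs to this call only,
    # so placeholders and summaries from earlier calls cannot leak in.
    graph = tf.Graph()
    with graph.as_default():
        X_placeholder = tf.placeholder(tf.float32, shape=(None, input_layer_size), name='X')
        y_placeholder = tf.placeholder(tf.uint8, shape=(None, num_labels), name='y')
        Theta1 = tf.Variable(nn.randInitializeWeights(input_layer_size, hidden_layer_size), name='Theta1')
        bias1 = tf.Variable(nn.randInitializeWeights(hidden_layer_size, 1), name='bias1')
        Theta2 = tf.Variable(nn.randInitializeWeights(hidden_layer_size, num_labels), name='Theta2')
        bias2 = tf.Variable(nn.randInitializeWeights(num_labels, 1), name='bias2')

        cost = nn.cost(X_placeholder, y_placeholder, Theta1, bias1, Theta2, bias2)
        optimize = tf.train.GradientDescentOptimizer(0.6).minimize(cost)
        accuracy, precision, recall, f1 = nn.evaluate(X_placeholder, y_placeholder, Theta1, bias1, Theta2, bias2)

        tf.summary.scalar('cost', cost)
        tf.summary.scalar('accuracy', accuracy)
        tf.summary.scalar('precision', precision)
        tf.summary.scalar('recall', recall)
        tf.summary.scalar('f1', f1)
        summaries = tf.summary.merge_all()   # collects only this graph's summaries

        init = tf.global_variables_initializer()
        saver = tf.train.Saver()

    # Run the training in a session bound to this graph.
    with tf.Session(graph=graph) as sess:
        sess.run(init)
        writer = tf.summary.FileWriter('./tmp/logs/' + folder, sess.graph)

        NUM_STEPS = 20
        for step in range(NUM_STEPS):
            sess.run(optimize, feed_dict={X_placeholder: X, y_placeholder: y})
            if (step + 1) % 10 == 0:
                summary = sess.run(summaries, feed_dict={X_placeholder: X_val, y_placeholder: y_val})
                writer.add_summary(summary, step)
                print('Step', step + 1, 'of', NUM_STEPS)

        save_path = saver.save(sess, './tmp/model_' + folder + '.ckpt')
        print("Model saved in file: %s" % save_path)

A quicker, if less tidy, alternative would be to call tf.reset_default_graph() at the top of train(), so the stale placeholders and summary ops from the previous call are discarded before the new model is built.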