TensorFlow import meta graph placeholder not fed

Asked: 2017-06-05 18:41:36

Tags: python machine-learning tensorflow artificial-intelligence

I'm working on a TensorFlow project. I've built and trained a CNN, and now I'm trying to load it in a separate file to make predictions. For some reason I keep getting the error "You must feed a value for placeholder tensor 'y_pred' with dtype float and shape [10]".

The file that builds the graph defines a variable y_pred for the predictions:

y_pred = tf.nn.softmax(layer_fc2)
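Note that the Python variable name alone is not stored in the graph; to retrieve the tensor by name later, the op itself has to carry the name. A minimal sketch of the two equivalent ways to do that in TF 1.x (the model code below uses the second):

# Option 1: name the softmax op directly
y_pred = tf.nn.softmax(layer_fc2, name='y_pred')
# Option 2: wrap the result in an identity op that carries the name
y_pred = tf.identity(tf.nn.softmax(layer_fc2), name='y_pred')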

The file in which I try to load the model looks like this:

# Imports needed to run this file (MNIST dataset path assumed)
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

# Create Session
sess = tf.Session()
# Load model
saver = tf.train.import_meta_graph('Model.meta')
saver.restore(sess, tf.train.latest_checkpoint('./'))
sess.run(tf.global_variables_initializer())
graph = tf.get_default_graph()
x_batch = mnist.test.next_batch(1)

x_batch = x_batch[0].reshape(1, 784) 
x = graph.get_tensor_by_name("x:0")
y_pred = graph.get_tensor_by_name("y_pred:0")


classification = sess.run(y_pred, feed_dict={x:x_batch})
print(classification)

The exact error I get is:

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'y_pred' with dtype float and shape [10]
 [[Node: y_pred = Placeholder[dtype=DT_FLOAT, shape=[10], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

I wonder whether I failed to set the value up correctly before exporting. Does anyone know why this isn't working?
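One way to check what the name 'y_pred:0' actually resolves to is to list the matching ops right after restoring the graph — a small diagnostic sketch (not part of my original script):

# Diagnostic: print every op whose name mentions 'y_pred'.
# If a stale meta file still holds an old placeholder, it shows up here
# as type 'Placeholder' rather than 'Identity' or 'Softmax'.
for op in tf.get_default_graph().get_operations():
    if 'y_pred' in op.name:
        print(op.name, op.type)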

Edit: including the model code:

# Network Design
# First Layer
layer_conv1, weights_conv1 = new_conv_layer(input=x_image, num_input_channels=num_channels, filter_size=filter_size1, num_filters=num_filters1, use_pooling=True)
# Second Layer
layer_conv2, weights_conv2 = new_conv_layer(input=layer_conv1, num_input_channels=num_filters1, filter_size=filter_size2, num_filters=num_filters2, use_pooling=True)
# Third Layer
layer_conv3, weights_conv3 = new_conv_layer(input=layer_conv2, num_input_channels=num_filters2, filter_size=filter_size3, num_filters=num_filters3, use_pooling=True)
# Flatten Layer
layer_flat, num_features = flatten_layer(layer_conv3)
# First Fully Connected Layer
layer_fc1 = new_fc_layer(input=layer_flat, num_inputs=num_features, num_outputs=fc_size, use_relu=True)
# Second Fully Connected Layer
layer_fc2 = new_fc_layer(input=layer_fc1, num_inputs=fc_size, num_outputs=num_classes, use_relu=False)

# softmaxResult = tf.placeholder(tf.float32, shape=[10], name='softmaxResult')
# Get class probabilities
y_pred = tf.nn.softmax(layer_fc2)
y_pred = tf.identity(y_pred, name="y_pred")
# session.run(y_pred, feed_dict={softmaxResult: y_pred})
# Predicted Class
y_pred_cls = tf.argmax(y_pred, dimension=1)
# softmaxResult.assign(y_pred_cls)

# Feed y_pred
# session.run(softmaxResult, feedDict={softmaxResult: softmaxResult})

# Define Cost Function
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=layer_fc2, labels=y_true)
cost = tf.reduce_mean(cross_entropy)

# Optimize Network
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(cost)
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# Run Session
session.run(tf.global_variables_initializer())

def print_progress(epoch, feed_dict_train, feed_dict_validate, val_loss):
    #Calculate accuracy on training set
    acc = session.run(accuracy, feed_dict=feed_dict_train)
    val_acc = session.run(accuracy, feed_dict=feed_dict_validate)
    msg = "Epoch {0} --- Training Accuracy: {1:>6.1%}, Validation Accuracy: {2:>6.1%}, Validation Loss: {3:.3f}"
    print(msg.format(epoch + 1, acc, val_acc, val_loss))

total_iterations = 0

#Optimization Function
def optimize(num_iterations):
    # Updates global rather than local value
    global total_iterations

    best_val_loss = float("inf")

    for i in range(total_iterations, total_iterations + num_iterations):
        # Get training data batch
        x_batch, y_batch = mnist.train.next_batch(batch_size)
        # Get a validation batch
        x_validate, y_validate = mnist.train.next_batch(batch_size)

        # Shrink to single dimension
        x_batch = x_batch.reshape(batch_size, img_size_flat)
        x_validate = x_validate.reshape(batch_size, img_size_flat)

        # Training feed
        feed_dict_train = {x: x_batch, y_true: y_batch}
        feed_dict_validate = {x: x_validate, y_true: y_validate}


        # Run the optimizer
        session.run(optimizer, feed_dict=feed_dict_train)

        # Print status at end of each epoch (defined as full pass through training dataset).
        if i % int(5000/batch_size) == 0: 
            val_loss = session.run(cost, feed_dict=feed_dict_validate)
            epoch = int(i / int(5000/batch_size))

            print_progress(epoch, feed_dict_train, feed_dict_validate, val_loss)

    # Update the global iteration counter once, after the loop finishes
    total_iterations += num_iterations

optimize(num_iterations=3000)

# Save the final model
saver = tf.train.Saver()
saved_path = saver.save(session, os.path.join(os.getcwd(),'MNIST Model'))
print("Model saved in: ", saved_path)

# Run on test image
image = mnist.test.next_batch(1)
feedin = image[0].reshape(1, 784)
inputStuff = {x:feedin}

classification = session.run(y_pred, feed_dict=inputStuff)
print(classification)

1 Answer:

Answer (score: 0):

Thanks @VS_FF.

The input keys had to be looked up by their full tensor names, e.g. 'x:0', when building the feed dict.
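For reference, a minimal sketch of a working inference script. Assumptions: the stale export has been removed and the model re-saved (note that the training code above saves to 'MNIST Model', producing 'MNIST Model.meta', while the loader imports 'Model.meta' — presumably a leftover from an earlier run in which y_pred was still a placeholder, which would explain the error), the input placeholder was created with name='x', and the softmax output carries the name 'y_pred':

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

with tf.Session() as sess:
    # Rebuild the graph from the freshly exported meta file, then restore weights.
    saver = tf.train.import_meta_graph('MNIST Model.meta')
    saver.restore(sess, tf.train.latest_checkpoint('./'))
    # No tf.global_variables_initializer() here: running it after restore()
    # would overwrite the weights that were just loaded.

    graph = tf.get_default_graph()
    # Tensor lookups need the ':0' output-index suffix: 'x:0', not 'x'.
    x = graph.get_tensor_by_name('x:0')
    y_pred = graph.get_tensor_by_name('y_pred:0')

    # Classify a single test image.
    image, _ = mnist.test.next_batch(1)
    probs = sess.run(y_pred, feed_dict={x: image.reshape(1, 784)})
    print(probs)           # softmax probabilities, shape (1, 10)
    print(probs.argmax())  # predicted digit

The two easy mistakes this avoids are fetching tensors without the ':0' suffix and re-running the variable initializer after restore.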