tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [10] vs. [10000]

Asked: 2017-09-17 14:41:36

Tags: python numpy tensorflow deep-learning mnist

I am new to neural networks and am learning TensorFlow from the tutorial here, but when I run the code I get this error:

tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [10] vs. [10000]
 [[Node: Equal = Equal[T=DT_INT64, _device="/job:localhost/replica:0/task:0/cpu:0"](ArgMax, _arg_Placeholder_2_0_2)]]

My code is as follows:

import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data

data = input_data.read_data_sets("data/MNIST/", one_hot=True)
data.test.cls = np.array([label.argmax() for label in data.test.labels])
img_size = 28
img_size_flat = img_size * img_size
img_shape = (img_size, img_size)
num_classes = 10

x = tf.placeholder(tf.float32, [None, img_size_flat])
y_true = tf.placeholder(tf.float32, [None, num_classes])
y_true_cls = tf.placeholder(tf.int64, [None])

weights = tf.Variable(tf.zeros([img_size_flat, num_classes]))
biases = tf.Variable(tf.zeros([num_classes]))

logits = tf.matmul(x, weights) + biases
y_pred = tf.nn.softmax(logits)
y_pred_cls = tf.argmax(y_pred)

cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_true)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.5).minimize(cost)

correct_prediction = tf.equal(y_pred_cls, y_true_cls)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

session = tf.Session()
session.run(tf.global_variables_initializer())
batch_size = 100

feed_dict_test = {x: data.test.images,
                  y_true: data.test.labels,
                  y_true_cls: data.test.cls}

def print_accuracy():
    acc = session.run(accuracy, feed_dict=feed_dict_test)
    print("Accuracy on test-set: {0:.1%}".format(acc))

print_accuracy()

Can someone explain why I am getting this error and how to fix it?

1 Answer:

Answer 0 (score: 0)

I found the solution. The problem was that I did not pass the axis to tf.argmax when computing y_pred_cls, so its shape was (10,) (the argmax was taken over the batch axis, one index per class) instead of one predicted class per test image, which is why it could not be compared with y_true_cls of shape (10000,). I fixed it by changing y_pred_cls = tf.argmax(y_pred) to y_pred_cls = tf.argmax(y_pred, axis=1).
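
For reference, a minimal sketch (assuming the TensorFlow 1.x placeholder API used in the question; the names y_pred, wrong and right are just illustrative) of how the axis argument changes the shape returned by tf.argmax:

import tensorflow as tf

# Softmax outputs, shape (batch_size, num_classes)
y_pred = tf.placeholder(tf.float32, [None, 10])

wrong = tf.argmax(y_pred)          # default axis=0: one index per class -> shape (10,)
right = tf.argmax(y_pred, axis=1)  # axis=1: one predicted class per example -> shape (batch_size,)

print(wrong.shape)  # (10,)
print(right.shape)  # (?,)

With axis=1 the result has the same length as y_true_cls, so tf.equal no longer complains about incompatible shapes.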