TensorFlow error: Dimensions must be equal

Asked: 2019-03-29 05:50:43

Tags: python-3.x tensorflow conv-neural-network

I have a dataset of 25,000 color images of size 100*100 (*3), and I am trying to build a simple neural network with one convolutional layer. The images show cells that are either infected with malaria or not, so my output has 2 classes. But my dimensions don't seem to match, and I don't know where my error comes from.

My neural network:

def simple_nn(X_training, Y_training, X_test, Y_test):
    input = 100*100*3
    batch_size = 25

    X = tf.placeholder(tf.float32, [batch_size, 100, 100, 3])
    #Was:
    # W = tf.Variable(tf.zeros([input, 2]))
    # b = tf.Variable(tf.zeros([2]))
    #Now:
    W = tf.Variable(tf.truncated_normal([4, 4, 3, 3], stddev=0.1))
    B = tf.Variable(tf.ones([3])/10) # What should I put here ??

    init = tf.global_variables_initializer()

    # model
    #Was:
    # Y = tf.nn.softmax(tf.matmul(tf.reshape(X, [-1, input]), W) + b)
    #Now:
    stride = 1  # with 'SAME' padding the output is still 100x100
    Ycnv = tf.nn.conv2d(X, W, strides=[1, stride, stride, 1], padding='SAME')
    Y = tf.nn.relu(Ycnv + B)

    # placeholder for correct labels
    Y_ = tf.placeholder(tf.float32, [None, 2])


    # loss function
    cross_entropy = -tf.reduce_sum(Y_ * tf.log(Y))

    # % of correct answers found in batch
    is_correct = tf.equal(tf.argmax(Y,1), tf.argmax(Y_,1))
    accuracy = tf.reduce_mean(tf.cast(is_correct, tf.float32))

    learning_rate = 0.00001

    optimizer = tf.train.GradientDescentOptimizer(learning_rate)
    train_step = optimizer.minimize(cross_entropy)
    sess = tf.Session()
    sess.run(init)
    #Training here...

My error:

Traceback (most recent call last):
  File "neural_net.py", line 135, in <module>
    simple_nn(X_training, Y_training, X_test, Y_test)
  File "neural_net.py", line 69, in simple_nn
    cross_entropy = -tf.reduce_sum(Y_ * tf.log(Y))
...
ValueError: Dimensions must be equal, but are 2 and 3 for 'mul' (op: 'Mul') with input shapes: [?,2], [25,100,100,3].

I previously used a single simple layer and it was working. I changed my weights and biases, and to be honest I don't know why I set the bias this way; I followed a tutorial (https://codelabs.developers.google.com/codelabs/cloud-tensorflow-mnist/#11), but it is not explained there. I also replaced my Y with a conv2d. And I don't know what my output should be in order to get a vector of size 2*1.

1 Answer:

Answer 0 (score: 2):

You have correctly defined your labels as

Y_ = tf.placeholder(tf.float32, [None, 2])

so the last dimension is 2. However, the output of your convolution step is not directly suitable for comparing it with the labels. What I mean is: if you do

Ycnv = tf.nn.conv2d(X, W, strides=[1, stride, stride, 1], padding='SAME')
Y = tf.nn.relu(Ycnv + B)

its output will be four-dimensional, as the error shows:

ValueError: Dimensions must be equal, but are 2 and 3 for 'mul' (op: 'Mul') with input shapes: [?,2], [25,100,100,3].

Therefore, it is not possible to directly multiply (or otherwise combine) the output of the convolution with the labels. I suggest flattening the output of the convolution (reshaping each example into a one-dimensional vector) and passing it into a fully connected layer of 2 units (as many units as you have classes). Like this:

Y = tf.reshape(Y, [batch_size, -1])  # flatten each example's feature map
logits = tf.layers.dense(Y, units=2)

and you can pass it to your loss.

I also suggest changing the loss to a more suitable version, for example tf.losses.sigmoid_cross_entropy.
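For instance, a minimal sketch of how that loss could slot in, assuming the logits tensor from the dense layer above (this snippet is illustrative, not from the original answer):

loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=Y_, logits=logits)
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)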

Furthermore, the way you are using the convolution is strange. Why are you putting a hand-crafted filter into the convolution? Besides, you then have to initialize it and put it into a collection yourself. In short, I suggest you delete all of the following code:

    W = tf.Variable(tf.truncated_normal([4, 4, 3, 3], stddev=0.1))
    B = tf.Variable(tf.ones([3])/10) # What should I put here ??

    init = tf.global_variables_initializer()

    # model
    #Was:
    # Y = tf.nn.softmax(tf.matmul(tf.reshape(X, [-1, input]), W) + b)
    #Now:
    stride = 1  # with 'SAME' padding the output is still 100x100
    Ycnv = tf.nn.conv2d(X, W, strides=[1, stride, stride, 1], padding='SAME')
    Y = tf.nn.relu(Ycnv + B)

and replace it with:

conv1 = tf.layers.conv2d(X, filters=64, kernel_size=3,
                         strides=1, padding='SAME',
                         activation=tf.nn.relu, name="conv1")
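With 'SAME' padding and a stride of 1, this layer keeps the 100x100 spatial size and outputs 64 channels, so (a hypothetical sanity check, not part of the original answer):

print(conv1.shape)  # (25, 100, 100, 64) with the question's batch_size of 25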

Also, init = tf.global_variables_initializer() should be placed at the end of the graph construction, because otherwise there will be variables that it does not catch.
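To see why the ordering matters, here is a minimal sketch of my own (not from the original answer): a variable created after the initializer op is built is not covered by it.

a = tf.Variable(1.0)
init = tf.global_variables_initializer()  # only sees `a`
b = tf.Variable(2.0)  # created after `init`, so `init` does not initialize it

with tf.Session() as sess:
    sess.run(init)
    sess.run(a)  # works
    sess.run(b)  # raises FailedPreconditionError: uninitialized value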

My final working code is:

def simple_nn():
    inp = 100*100*3
    batch_size = 2

    X = tf.placeholder(tf.float32, [batch_size, 100, 100, 3])
    Y_ = tf.placeholder(tf.float32, [None, 2])
    #Was:
    # W = tf.Variable(tf.zeros([input, 2]))
    # b = tf.Variable(tf.zeros([2]))
    #Now:



    # model
    #Was:
    # Y = tf.nn.softmax(tf.matmul(tf.reshape(X, [-1, input]), W) + b)
    #Now:
    stride = 1  # with 'SAME' padding the output is still 100x100

    conv1 = tf.layers.conv2d(X, filters=64, kernel_size=3,
                         strides=1, padding='SAME',
                         activation=tf.nn.relu, name="conv1")
    Y = tf.reshape(conv1, [batch_size, -1])  # flatten each example to a vector
    logits = tf.layers.dense(Y, units=2)  # raw logits: no activation here

    # loss function
    cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(labels=Y_, logits=logits)
    loss = tf.reduce_mean(cross_entropy)

    # % of correct answers found in batch
    is_correct = tf.equal(tf.argmax(logits, 1), tf.argmax(Y_, 1))
    accuracy = tf.reduce_mean(tf.cast(is_correct, tf.float32))

    learning_rate = 0.00001

    optimizer = tf.train.GradientDescentOptimizer(learning_rate)
    train_step = optimizer.minimize(loss)

    init = tf.global_variables_initializer()

    with tf.Session() as sess:
        sess.run(init)
        ...
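The elided training section might look like the following sketch (hypothetical: it assumes X_training is a numpy array of shape [N, 100, 100, 3] and Y_training contains one-hot labels of shape [N, 2]; the loop and batching scheme are mine, not the original answerer's):

for step in range(1000):
    start = (step * batch_size) % (len(X_training) - batch_size)
    batch_X = X_training[start:start + batch_size]
    batch_Y = Y_training[start:start + batch_size]
    # one optimization step; also fetch loss and accuracy for monitoring
    _, batch_loss, batch_acc = sess.run([train_step, loss, accuracy],
                                        feed_dict={X: batch_X, Y_: batch_Y})
    if step % 100 == 0:
        print("step", step, "loss", batch_loss, "accuracy", batch_acc)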