Zero accuracy with a TensorFlow CNN?

Asked: 2019-04-06 13:28:30

Tags: python-3.x tensorflow deep-learning conv-neural-network

I have a dataset of 25,000 color images of size 100 × 100 (× 3), and I am trying to build a simple neural network with one convolutional layer. The images show cells that are either infected with malaria or not, so my output has size 2. But for every batch I get 0% accuracy. My batch size is 1, and I have tried other sizes, but I still get 0% accuracy.

My CNN:

import math
import tensorflow as tf

def simple_nn(X_training, Y_training, X_test, Y_test):
    input = 100*100*3
    h1 = 100
    batch_size = 1
    learning_rate = 0.000001
    dropout = 0.2

    X = tf.placeholder(tf.float32, [batch_size, 100, 100, 3], name="is_train")
    Y_ = tf.placeholder(tf.float32, [None, 2])

    #Layers
    conv1 = tf.layers.conv2d(X, filters=64, kernel_size=4,
                         strides=2, padding='SAME',
                         activation=tf.nn.relu, name="conv1")
    conv1 = tf.layers.batch_normalization(conv1)
    conv1 = tf.layers.max_pooling2d(conv1, 2, 2)

    conv2 = tf.layers.conv2d(conv1, filters=128, kernel_size=3,
                         strides=2, padding='SAME',
                         activation=tf.nn.relu, name="conv2")
    conv2 = tf.layers.dropout(conv2, rate=dropout)

    conv3 = tf.layers.conv2d(conv2, filters=256, kernel_size=3,
                     strides=2, padding='SAME',
                     activation=tf.nn.relu, name="conv3")
    conv3 = tf.layers.dropout(conv3, rate=dropout)

    conv4 = tf.layers.conv2d(conv3, filters=64, kernel_size=3,
                     strides=2, padding='SAME',
                     activation=tf.nn.relu, name="conv4")
    conv4 = tf.layers.max_pooling2d(conv4, 2, 2)

    conv5 = tf.layers.conv2d(conv4, filters=32, kernel_size=3,
                         strides=2, padding='SAME',
                         activation=tf.nn.relu, name="conv5")
    Y = tf.reshape(conv5, [batch_size,-1])
    logits = tf.layers.dense(Y, units=2, activation=tf.nn.relu)

    # loss function
    cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(labels=Y_, logits=logits)
    loss = tf.reduce_mean(tf.cast(cross_entropy, tf.float32))

    # % of correct answers found in batch
    is_correct = tf.equal(tf.argmax(Y,1), tf.argmax(Y_,1))
    accuracy = tf.reduce_mean(tf.cast(is_correct, tf.float32))


    optimizer = tf.train.GradientDescentOptimizer(learning_rate)
    train_step = optimizer.minimize(cross_entropy)

    init = tf.global_variables_initializer()

    sess = tf.Session()
    sess.run(init)

    for i in range(math.floor(len(X_training)/batch_size)):
        st = batch_size * i
        end = st + batch_size

        if end >= math.floor(len(X_training)) - batch_size:
            break
        batch_X, batch_Y = X_training[st:end], Y_training[st:end]
        train_data={X: batch_X, Y_: batch_Y}

        sess.run(train_step, feed_dict=train_data)

        #Get the accuracy and loss
        a, l = sess.run([accuracy, cross_entropy], feed_dict=train_data)
        print("acc : "+str(a)+" , loss : "+str(l))

My output:

acc : 0.0 , loss : [0.6931472]
acc : 0.0 , loss : [0.6931472]
acc : 0.0 , loss : [0.69436306]
acc : 0.0 , loss : [0.6931472]
acc : 0.0 , loss : [0.6931662]
acc : 0.0 , loss : [0.6931472]
acc : 0.0 , loss : [0.6925567]
acc : 0.0 , loss : [0.6931472]
acc : 0.0 , loss : [0.6931472]
acc : 0.0 , loss : [0.69259375]
acc : 0.0 , loss : [0.6912933]
acc : 0.0 , loss : [0.6957785]
acc : 0.0 , loss : [0.6931472]
acc : 0.0 , loss : [0.6990725]
acc : 0.0 , loss : [0.69037354]
acc : 0.0 , loss : [0.6931472]
acc : 0.0 , loss : [0.6931472]
acc : 0.0 , loss : [0.6931472]
acc : 0.0 , loss : [0.6931472]
acc : 0.0 , loss : [0.6991633]
acc : 0.0 , loss : [0.6931472]
acc : 0.0 , loss : [0.6931472]
acc : 0.0 , loss : [0.6931472]
acc : 0.0 , loss : [0.700589]
acc : 0.0 , loss : [0.6931472]

I was getting 65% (i.e. acc = 0.65) with a simple non-convolutional network, but since switching to conv layers I get acc = 0.0. At first I thought that, for some reason, I was returning the accuracy in the loss variable when using convolutional layers, but now I don't think so; I believe something is wrong with my loss function. Even when I simplify the model down to one layer, the same thing happens and my loss stays around 0.69.
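For what it's worth, that plateau is not an arbitrary number: ln 2 ≈ 0.6931 is the cross-entropy of a two-class model that always predicts 50/50, i.e. whose logits are effectively constant. A quick check:

import math
# Cross-entropy of a uniform two-class prediction: -ln(0.5) = ln(2)
print(math.log(2))   # 0.6931471805599453 -- the value the loss keeps returning to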

1 Answer:

Answer 0 (score: 1)

You should minimize the reduced (scalar) loss, not the per-example vector. Change this line:

train_step = optimizer.minimize(cross_entropy)

to this:

train_step = optimizer.minimize(loss)

Also, the logits layer is not included in the accuracy calculation. Do this instead:

is_correct = tf.equal(tf.argmax(logits, 1), tf.argmax(Y_, 1))
accuracy = tf.reduce_mean(tf.cast(is_correct, tf.float32))

Also, you are applying two activations to the logits layer: first you have tf.nn.relu, and then softmax (via softmax_cross_entropy_with_logits_v2). Not sure whether you did that intentionally.
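Putting those three fixes together, a minimal sketch of the tail end of simple_nn could look like the following (TF 1.x API, reusing the names from the question; dropping the relu assumes the double activation was unintentional):

# Emit raw logits: softmax_cross_entropy_with_logits_v2 applies the softmax itself.
logits = tf.layers.dense(Y, units=2, activation=None)

# Reduce the per-example cross-entropy to a scalar before minimizing.
cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(labels=Y_, logits=logits)
loss = tf.reduce_mean(cross_entropy)

# Compare predicted classes (argmax of the logits) against the labels.
is_correct = tf.equal(tf.argmax(logits, 1), tf.argmax(Y_, 1))
accuracy = tf.reduce_mean(tf.cast(is_correct, tf.float32))

optimizer = tf.train.GradientDescentOptimizer(learning_rate)
train_step = optimizer.minimize(loss)

The accuracy fix is the key one for the 0.0 readings: taking argmax over Y (the flattened 32-channel conv output) almost always yields an index greater than 1, which can never equal argmax over the two-element label vector, so the comparison fails on essentially every example.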