TensorFlow control flow

Time: 2018-03-04 02:36:42

Tags: python tensorflow

I am creating a convolutional neural network for image classification. I am new to TensorFlow, and to get a grip on its control flow I put a simple "Hello World!" print statement inside conv_net(). However, when I run the program, the output is

Hello World!
Epoch 0 completed out of 5 Loss: 0.959831058979
Epoch 1 completed out of 5 Loss: 1.15144479275
Epoch 2 completed out of 5 Loss: 1.15144479275
Epoch 3 completed out of 5 Loss: 1.35144472122
Epoch 4 completed out of 5 Loss: 1.15144479275 

This suggests that conv_net() executes only once rather than NUM_EPOCHS times. Why is that? Here is a snippet of my program:

def conv_net(x):
    print("Hello World!")

    with tf.variable_scope("ConvNet"):
        # First Layer
        w1 = tf.Variable(tf.truncated_normal([11, 11, 3, 96], stddev=0.03))
        b1 = tf.Variable(tf.truncated_normal([96]))
        conv2d_layer1 = tf.nn.conv2d(x, w1, [1, 4, 4, 1], padding='SAME')
        conv2d_layer1 += b1
        conv2d_layer1 = tf.nn.relu(conv2d_layer1)
        conv2d_layer1 = tf.nn.max_pool(conv2d_layer1, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='SAME')
        … 

    return y

def main():
    img_batch, lbl_batch = input_pipeline()
    prediction = conv_net(img_batch)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=prediction, 
                                                                    labels=tf.one_hot(lbl_batch, NUM_CLASSES)))
    optimizer = tf.train.AdamOptimizer(LEARNING_RATE).minimize(cost)

    with tf.Session() as sess:  
        sess.run(tf.group(tf.global_variables_initializer(), tf.local_variables_initializer()))
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(coord=coord)

        for epoch in range(NUM_EPOCHS):
            epoch_loss = 0
            for _ in range(int(train_size/BATCH_SIZE)):
                c, _ = sess.run([cost, optimizer])
                epoch_loss += c

            print('Epoch', epoch, 'completed out of', NUM_EPOCHS, 'Loss:', epoch_loss)

        coord.request_stop()
        coord.join(threads)
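For context, the behavior above matches TF1's build-once/run-many model: the Python body of conv_net() runs exactly once, when the graph is constructed, while each sess.run() call re-executes the already-built graph without re-running any Python. A TensorFlow-free sketch of the same pattern (the names build_graph and run are illustrative, not part of any API):

```python
def build_graph():
    # Runs once, at "graph construction" time -- like the body of conv_net()
    print("Hello World!")

    def run(x):
        # Runs on every execution -- like each sess.run() call
        return x * 2

    return run

graph = build_graph()                    # prints "Hello World!" exactly once
results = [graph(i) for i in range(3)]  # executes 3 times, no further prints
```

So the single "Hello World!" is expected: to print something on every training step, it would have to be part of the graph itself (e.g. a tf.Print op) rather than a plain Python print.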

0 Answers:

No answers