TensorFlow Slim and AssertionError

Time: 2017-01-29 10:31:34

Tags: python tensorflow

I am new to TensorFlow and have been experimenting with TF-Slim. I tried to translate the MNIST tutorial from the TensorFlow tutorials into Slim syntax. It worked fine with a single, unbatched set of images fed into the model. Then I added a tf.train.batch input thread to the code, and it stopped working when I ran the whole file, giving this error:

Traceback (most recent call last):
  File ".../slim.py", line 43, in <module>
    train_op = slim.learning.create_train_op(loss, optimiser)
  File "...\Python\Python35\lib\site-packages\tensorflow\contrib\slim\python\slim\learning.py", line 442, in create_train_op
    assert variables_to_train
AssertionError

However, I can selectively re-run the create_train_op line and then train the model, although the loss does not decrease, so essentially it does not work. This still lets me get the graph visualisation from TensorBoard (attached below), and I cannot see any errors there.

I know I am doing something wrong, but I cannot see where.

import tensorflow as tf
import time
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow.contrib.slim as slim


def model(inputs, is_training=True):
    end_points = {}
    with slim.arg_scope([slim.conv2d, slim.fully_connected], activation_fn=tf.nn.relu,
                        weights_initializer=tf.truncated_normal_initializer(stddev=0.1)):
        net = slim.conv2d(inputs, 32, [5, 5], scope="conv1")
        end_points['conv1'] = net
        net = slim.max_pool2d(net, [2, 2], scope="pool1")
        end_points['pool1'] = net
        net = slim.conv2d(net, 64, [5, 5], scope="conv2")
        end_points['conv2'] = net
        net = slim.max_pool2d(net, [2, 2], scope="pool2")
        end_points['pool2'] = net
        net = slim.flatten(net, scope="flatten")
        net = slim.fully_connected(net, 1024, scope="fc1")
        end_points['fc1'] = net
        net = slim.dropout(net, keep_prob=0.75, is_training=is_training, scope="dropout")
        net = slim.fully_connected(net, 10, scope="final", activation_fn=None)
        end_points['final'] = net
    return net, end_points

mnist = input_data.read_data_sets("MNIST_data/", one_hot=False)
batch = mnist.train.next_batch(20000)

x_image = tf.reshape(batch[0], [-1,28,28,1])
label = tf.one_hot(batch[1], 10)

image, labels = tf.train.batch([x_image[0], label[0]], batch_size=100)

with tf.Graph().as_default():
    tf.logging.set_verbosity(tf.logging.DEBUG)
    logits, _ = model(image)
    predictions = tf.nn.softmax(logits)
    loss = slim.losses.softmax_cross_entropy(predictions, labels)
    config = tf.ConfigProto()
    optimiser = tf.train.AdamOptimizer(1e-4)
    train_op = slim.learning.create_train_op(loss, optimiser)
    thisloss = slim.learning.train(train_op, "C:/temp/test2", number_of_steps=100, save_summaries_secs=30, session_config=config)

[TensorBoard graph visualisation]

1 Answer:

Answer 0 (score: 0)

You need to create all of your operations under the same graph, including the input data:

with tf.Graph().as_default():
  mnist = input_data.read_data_sets("MNIST_data/", one_hot=False)
  batch = mnist.train.next_batch(20000)

  x_image = tf.reshape(batch[0], [-1,28,28,1])
  label = tf.one_hot(batch[1], 10)

  image, labels = tf.train.batch([x_image[0], label[0]], batch_size=100)

  tf.logging.set_verbosity(tf.logging.DEBUG)
  logits, _ = model(image)
  predictions = tf.nn.softmax(logits)
  loss = slim.losses.softmax_cross_entropy(predictions, labels)
  config = tf.ConfigProto()
  optimiser = tf.train.AdamOptimizer(1e-4)
  train_op = slim.learning.create_train_op(loss, optimiser)
  thisloss = slim.learning.train(train_op, "C:/temp/test2", number_of_steps=100, save_summaries_secs=30, session_config=config)
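A minimal sketch of why the assertion fires. This is toy code, not the real TensorFlow API: TF 1.x keeps a per-graph collection of trainable variables, and anything created outside `with tf.Graph().as_default():` registers with the implicit default graph instead, so `create_train_op` (which asserts that the collection is non-empty) finds nothing to train in the freshly created graph. The class and function names below are illustrative stand-ins.

```python
# Toy model of TF 1.x graph scoping: variables register with whichever
# graph is "current" at the moment they are created.

class Graph:
    def __init__(self):
        self.trainable_variables = []

    def as_default(self):
        return _DefaultScope(self)

class _DefaultScope:
    def __init__(self, graph):
        self._graph = graph

    def __enter__(self):
        global _current
        self._prev, _current = _current, self._graph
        return self._graph

    def __exit__(self, *exc):
        global _current
        _current = self._prev

_current = Graph()  # the implicit default graph

def variable(name):
    # Stand-in for tf.Variable: registers with the current graph
    _current.trainable_variables.append(name)
    return name

def create_train_op(graph):
    # Mirrors the check in slim.learning.create_train_op
    assert graph.trainable_variables
    return "train_op"

# Bug: input ops/variables built before entering the new graph's scope
variable("input_batch")            # lands in the implicit default graph
broken = Graph()
with broken.as_default():
    try:
        create_train_op(broken)    # broken's collection is empty
        outcome = "ok"
    except AssertionError:
        outcome = "AssertionError"
print(outcome)                     # AssertionError

# Fix, as in the answer: build everything inside the same graph scope
fixed = Graph()
with fixed.as_default():
    variable("input_batch")
    print(create_train_op(fixed))  # train_op
```

This is the same reason the answer moves `input_data.read_data_sets`, `tf.reshape`, `tf.one_hot` and `tf.train.batch` inside the `with` block: every op must be created while the graph that `slim.learning.train` will run is the current default.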