Changing the number of TensorFlow convolution and pooling layers with the MNIST dataset

Asked: 2017-06-29 14:51:55

Tags: python tensorflow

I am using Windows 10 Pro, Python 3.6.2rc1, Visual Studio 2017, and TensorFlow. I am working from the TensorFlow example in the tutorial at the following link:

https://www.tensorflow.org/tutorials/layers

I added another convolution and pooling layer before flattening the last (third) layer, to see whether the accuracy changes.

The code I added is as follows:

## Input Tensor Shape: [batch_size, 7, 7, 64]
## Output Tensor Shape: [batch_size, 7, 7, 64]
conv3 = tf.layers.conv2d(
    inputs=pool2,
    filters=64,
    kernel_size=[3, 3],
    padding=1,
    activation=tf.nn.relu)

pool3 = tf.layers.max_pooling2d(inputs=conv3, pool_size=[2, 2], strides=1)
pool3_flat = tf.reshape(pool3, [-1, 7 * 7 * 64])

I changed the padding to 1 and the stride to 1 to make sure the output stays the same size as the input. But after adding this new layer I get the following warning, no results are displayed, and the program simply ends:

WARNING:tensorflow:From E:\Apps\DA2CNNTest\TFHWDetection With More Layers\TFClassification\TFClassification\TFClassification.py:179: calling BaseEstimator.fit (from tensorflow.contrib.learn.python.learn.estimators.estimator) with batch_size is deprecated and will be removed after 2016-12-01.
Instructions for updating:
Estimator is decoupled from Scikit Learn interface by moving into
separate class SKCompat. Arguments x, y and batch_size are only
available in the SKCompat class, Estimator will only accept input_fn.
Example conversion:
  est = Estimator(...) -> est = SKCompat(Estimator(...))
The thread 'MainThread' (0x5c8) has exited with code 0 (0x0).
The program '[13468] python.exe' has exited with code 1 (0x1).
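For reference, tf.layers.conv2d only accepts the strings "valid" or "same" for its padding argument, so an integer such as padding=1 is rejected. An explicit one-pixel pad can still be written with tf.pad; here is a minimal sketch of that variant (it assumes the pool2 tensor of shape [batch_size, 7, 7, 64] from the code above):

# Zero-pad height and width by 1 on each side (NHWC layout): 7x7 -> 9x9.
pool2_padded = tf.pad(pool2, paddings=[[0, 0], [1, 1], [1, 1], [0, 0]])

# A 3x3 "valid" convolution on the padded input restores 7x7.
conv3 = tf.layers.conv2d(
    inputs=pool2_padded,
    filters=64,
    kernel_size=[3, 3],
    padding="valid",
    activation=tf.nn.relu)

# Note: with the default padding="valid", 2x2 pooling at stride 1 would
# shrink 7x7 to 6x6; padding="same" is needed to keep 7x7.
pool3 = tf.layers.max_pooling2d(
    inputs=conv3, pool_size=[2, 2], strides=1, padding="same")
pool3_flat = tf.reshape(pool3, [-1, 7 * 7 * 64])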
Without this added layer everything works fine. To get around the problem, I changed conv3 and pool3 as follows:

conv3 = tf.layers.conv2d(
    inputs=pool2,
    filters=64,
    kernel_size=[5, 5],
    padding="same",
    activation=tf.nn.relu)

# Input Tensor Shape: [batch_size, 7, 7, 64]
# Output Tensor Shape: [batch_size, 3, 3, 64]
pool3 = tf.layers.max_pooling2d(inputs=conv3, pool_size=[2, 2], strides=2)
pool3_flat = tf.reshape(pool3, [-1, 3 * 3 * 64])
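For reference, the shapes in those comments follow from the standard output-size formula for "valid" pooling:

# output size under "valid" pooling: floor((n - pool_size) / stride) + 1
# conv3 (5x5, padding="same", stride 1): 7x7 -> 7x7
# pool3 (2x2 pool, stride 2): floor((7 - 2) / 2) + 1 = 3, so 7x7 -> 3x3
# pool3_flat: [batch_size, 3 * 3 * 64] = [batch_size, 576]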

Then I ran into a different error at

mnist_classifier.fit(
    x=train_data,
    y=train_labels,
    batch_size=100,
    steps=20000,
    monitors=[logging_hook])

as follows:

tensorflow.python.framework.errors_impl.NotFoundError: Key conv2d_2/bias not found in checkpoint
     [[Node: save/RestoreV2_5 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_5/tensor_names, save/RestoreV2_5/shape_and_slices)]]

The error points directly at monitors=[logging_hook].

My entire code is below; as you can see, I commented out the previous version with padding=1.

I would really appreciate it if you could point out my mistake and explain why this is happening. Also, are my input and output dimensions for the third layer correct?

Full code:

"""Convolutional Neural Network Estimator for MNIST, built with tf.layers."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import tensorflow as tf

from tensorflow.contrib import learn
from tensorflow.contrib.learn.python.learn.estimators import model_fn as model_fn_lib



tf.logging.set_verbosity(tf.logging.INFO)

def cnn_model_fn(features, labels, mode):
    """Model function for CNN."""

    input_layer = tf.reshape(features, [-1, 28, 28, 1])


    # Input Tensor Shape: [batch_size, 28, 28, 1]
    # Output Tensor Shape: [batch_size, 28, 28, 32]
    conv1 = tf.layers.conv2d(
        inputs=input_layer,
        filters=32,
        kernel_size=[5, 5],
        padding="same",
        activation=tf.nn.relu)

    # Input Tensor Shape: [batch_size, 28, 28, 32]
    # Output Tensor Shape: [batch_size, 14, 14, 32]
    pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)

    # Convolutional Layer #2
    # Input Tensor Shape: [batch_size, 14, 14, 32]
    # Output Tensor Shape: [batch_size, 14, 14, 64]
    conv2 = tf.layers.conv2d(
        inputs=pool1,
        filters=64,
        kernel_size=[5, 5],
        padding="same",
        activation=tf.nn.relu)

    # Pooling Layer #2
    # Input Tensor Shape: [batch_size, 14, 14, 64]
    # Output Tensor Shape: [batch_size, 7, 7, 64]
    pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)

    '''Adding a new layer of conv and pool'''
    ## Input Tensor Shape: [batch_size, 7, 7, 32]
    ## Output Tensor Shape: [batch_size, 7, 7, 64]
    #conv3 = tf.layers.conv2d(
    #    inputs=pool2,
    #    filters=64,
    #    kernel_size=[3, 3],
    #    padding=1,
    #    activation=tf.nn.relu)

    ## Input Tensor Shape: [batch_size, 7, 7, 64]
    ## Output Tensor Shape: [batch_size, 7, 7, 64]
    #pool3 = tf.layers.max_pooling2d(inputs=conv3, pool_size=[2, 2], strides=1)
    #pool3_flat = tf.reshape(pool3, [-1, 7 * 7 * 64])

    # Input Tensor Shape: [batch_size, 7, 7, 64]
    # Output Tensor Shape: [batch_size, 7, 7, 64]
    conv3 = tf.layers.conv2d(
        inputs=pool2,
        filters=64,
        kernel_size=[5, 5],
        padding="same",
        activation=tf.nn.relu)

    # Input Tensor Shape: [batch_size, 7, 7, 64]
    # Output Tensor Shape: [batch_size, 3, 3, 64]
    pool3 = tf.layers.max_pooling2d(inputs=conv3, pool_size=[2, 2], strides=2)

    '''End of manipulation'''

    # Input Tensor Shape: [batch_size, 3, 3, 64]
    # Output Tensor Shape: [batch_size, 3 * 3 * 64]
    pool3_flat = tf.reshape(pool3, [-1, 3 * 3 * 64])

    # Input Tensor Shape: [batch_size, 3 * 3 * 64]
    # Output Tensor Shape: [batch_size, 1024]
    # dense(). Constructs a dense layer. Takes number of neurons and activation function as arguments.
    dense = tf.layers.dense(inputs=pool3_flat, units=1024, activation=tf.nn.relu)

    # Add dropout operation; 0.6 probability that element will be kept
    dropout = tf.layers.dropout(
        inputs=dense, rate=0.4, training=mode == learn.ModeKeys.TRAIN)

    logits = tf.layers.dense(inputs=dropout, units=10)

    loss = None
    train_op = None

    # Calculate Loss (for both TRAIN and EVAL modes)
    if mode != learn.ModeKeys.INFER:
        onehot_labels = tf.one_hot(indices=tf.cast(labels, tf.int32), depth=10)
        loss = tf.losses.softmax_cross_entropy(
            onehot_labels=onehot_labels, logits=logits)

    # Configure the Training Op (for TRAIN mode)
    if mode == learn.ModeKeys.TRAIN:
        train_op = tf.contrib.layers.optimize_loss(
            loss=loss,
            global_step=tf.contrib.framework.get_global_step(),
            learning_rate=0.001,
            optimizer="SGD")

    # Generate Predictions
    # The logits layer of our model returns our predictions as raw values in a [batch_size, 10]-dimensional tensor.
    predictions = {
        "classes": tf.argmax(input=logits, axis=1),
        "probabilities": tf.nn.softmax(logits, name="softmax_tensor")
    }

    # Return a ModelFnOps object
    return model_fn_lib.ModelFnOps(
        mode=mode, predictions=predictions, loss=loss, train_op=train_op)

def main(unused_argv):
    # Load training and eval data
    mnist = learn.datasets.load_dataset("mnist")
    train_data = mnist.train.images  # Returns np.array
    train_labels = np.asarray(mnist.train.labels, dtype=np.int32)
    eval_data = mnist.test.images  # Returns np.array
    eval_labels = np.asarray(mnist.test.labels, dtype=np.int32)

    # Create the Estimator
    mnist_classifier = learn.Estimator(
        model_fn=cnn_model_fn, model_dir="/tmp/mnist_convnet_model")

    # Set up logging for predictions
    # Log the values in the "Softmax" tensor with label "probabilities"
    tensors_to_log = {"probabilities": "softmax_tensor"}
    logging_hook = tf.train.LoggingTensorHook(
        tensors=tensors_to_log, every_n_iter=50)

    # Train the model
    mnist_classifier.fit(
        x=train_data,
        y=train_labels,
        batch_size=100,
        steps=20000,
        monitors=[logging_hook])

    # Configure the accuracy metric for evaluation
    # change metrics variable name
    metricss = {
        "accuracy":
            learn.MetricSpec(
                metric_fn=tf.metrics.accuracy, prediction_key="classes"),
    }

    # Evaluate the model and print results
    # for i in range(100)
    eval_results = mnist_classifier.evaluate(
        x=eval_data[0:100], y=eval_labels[0:100], metrics=metricss)
    print(eval_results)

if __name__ == "__main__":
    tf.app.run()

2 Answers:

Answer 0 (score: 0)

It looks like the trained model already sitting in model_dir conflicts with your changed graph. The Estimator loads the checkpoint from the saved model directory and continues training from the previously saved model. So whenever you make a change to the model, you need to delete the old model directory and start training again.
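A minimal sketch of that clean-slate restart (it reuses the model_dir from the question; shutil.rmtree is just one way to remove the stale directory):

import shutil

# Delete the stale checkpoint directory so the changed graph trains from scratch.
shutil.rmtree("/tmp/mnist_convnet_model", ignore_errors=True)

# Recreate the Estimator exactly as in the question's main().
mnist_classifier = learn.Estimator(
    model_fn=cnn_model_fn, model_dir="/tmp/mnist_convnet_model")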

Answer 1 (score: 0)

A simple fix for this is to define a custom checkpoint directory for your model, as shown below.

tf.train.generate_checkpoint_state_proto("/tmp/","/tmp/mnist_convnet_model")

This solved the problem for the MNIST example, and it also gives you a handle for controlling where the checkpoints go.
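Equivalently, simply pointing the Estimator at a fresh model_dir avoids loading the stale checkpoint in the first place (a sketch; the "_v2" directory name is purely illustrative):

# Any directory that holds no old checkpoint works.
mnist_classifier = learn.Estimator(
    model_fn=cnn_model_fn, model_dir="/tmp/mnist_convnet_model_v2")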