CNN implementation with Keras and TensorFlow

Date: 2019-02-08 03:44:44

Tags: tensorflow keras conv-neural-network

I have created a CNN model using Keras and I am training it on the MNIST dataset. I get a reasonable accuracy of around 98%, which is what I expected:

from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D, Flatten, Dense

model = Sequential()
model.add(Conv2D(64, 5, activation="relu", input_shape=(28, 28, 1)))
model.add(MaxPool2D())
model.add(Conv2D(64, 5, activation="relu"))
model.add(MaxPool2D())
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam', 
    loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(data.x_train, data.y_train, 
    batch_size=256, validation_data=(data.x_test, data.y_test))

Now I want to build the same model, but using vanilla TensorFlow. Here is how I went about it:

import tensorflow as tf

X = tf.placeholder(shape=[None, 784], dtype=tf.float32, name="X")
Y = tf.placeholder(shape=[None, 10], dtype=tf.float32, name="Y")

net = tf.reshape(X, [-1, 28, 28, 1])
net = tf.layers.conv2d(
  net, filters=64, kernel_size=5, padding="valid", activation=tf.nn.relu)
net = tf.layers.max_pooling2d(net, pool_size=2, strides=2)
net = tf.layers.conv2d(
  net, filters=64, kernel_size=5, padding="valid", activation=tf.nn.relu)
net = tf.layers.max_pooling2d(net, pool_size=2, strides=2)
net = tf.contrib.layers.flatten(net)
net = tf.layers.dense(net, name="dense1", units=256, activation=tf.nn.relu)
model = tf.layers.dense(net, name="output", units=10)

This is how I train and test it:

loss = tf.nn.softmax_cross_entropy_with_logits_v2(labels=Y, logits=model)
opt = tf.train.AdamOptimizer().minimize(loss)
accuracy = tf.cast(tf.equal(tf.argmax(model, 1), tf.argmax(Y, 1)), tf.float32)

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    for batch in range(data.get_number_of_train_batches(batch_size)):
        x, y = data.get_next_train_batch(batch_size)
        sess.run([loss, opt], feed_dict={X: x, Y: y})

    for batch in range(data.get_number_of_test_batches(batch_size)):
        x, y = data.get_next_test_batch(batch_size)
        sess.run(accuracy, feed_dict={X: x, Y: y})

However, the final accuracy of this model drops to around 80%. What is the main difference between my Keras and TensorFlow implementations of the model? Why does the accuracy differ so much?

2 Answers:

Answer 0 (score: 3)

I don't see any mistake in your code. Note, however, that your current model is heavily parameterized for such a simple problem, due to the Dense layer, which alone introduces over 260k trainable parameters:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_3 (Conv2D)            (None, 24, 24, 64)        1664      
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 12, 12, 64)        0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 8, 8, 64)          102464    
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 4, 4, 64)          0         
_________________________________________________________________
flatten_2 (Flatten)          (None, 1024)              0         
_________________________________________________________________
dense_2 (Dense)              (None, 256)               262400    
_________________________________________________________________
dense_3 (Dense)              (None, 10)                2570      
=================================================================
Total params: 369,098
Trainable params: 369,098
Non-trainable params: 0
_________________________________________________________________
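For reference, these counts follow directly from the layer shapes (kernel height × width × input channels × output channels, plus one bias per output unit):

conv1  = 5*5*1*64 + 64        # = 1,664
conv2  = 5*5*64*64 + 64       # = 102,464
dense1 = 4*4*64*256 + 256     # 1,024 flattened inputs x 256 units = 262,400
dense2 = 256*10 + 10          # = 2,570
print(conv1 + conv2 + dense1 + dense2)  # 369,098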

Below, I run your code with the following changes:

  • A few modifications to make the code work with the MNIST dataset from keras.datasets
  • A simplified model: basically, I remove the 256-node Dense layer, drastically reducing the number of trainable parameters, and introduce some dropout for regularization.

With these changes, both models achieve over 90% accuracy on the validation set after the first epoch. So the problem you encountered seems to be a hard optimization problem leading to highly variable outcomes, rather than a bug in your code.

# Import the datasets
import numpy as np
from keras.datasets import mnist
from keras.utils import to_categorical

(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Add a channel dimension
x_train = np.expand_dims(x_train, axis=-1)
x_test = np.expand_dims(x_test, axis=-1)

# One-hot encode the labels
y_train = to_categorical(y_train, num_classes=None)
y_test = to_categorical(y_test, num_classes=None)

batch_size = 64

# Fit model using Keras
import keras
import numpy as np
from keras.layers import Conv2D, MaxPool2D, Flatten, Dense, Dropout
from keras.models import Sequential

model = Sequential()
model.add(Conv2D(32, 5, activation="relu", input_shape=(28, 28, 1)))
model.add(MaxPool2D())
model.add(Conv2D(32, 5, activation="relu"))
model.add(MaxPool2D())
model.add(Flatten())
model.add(Dropout(0.25))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam', 
    loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, 
    batch_size=32, validation_data=(x_test, y_test), epochs=1)

Results:

Train on 60000 samples, validate on 10000 samples
Epoch 1/1
60000/60000 [==============================] - 35s 583us/step - loss: 1.5217 - acc: 0.8736 - val_loss: 0.0850 - val_acc: 0.9742

Note that the number of trainable parameters is now only a fraction of what it is in your model:

model.summary()
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_3 (Conv2D)            (None, 24, 24, 32)        832       
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 12, 12, 32)        0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 8, 8, 32)          25632     
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 4, 4, 32)          0         
_________________________________________________________________
flatten_2 (Flatten)          (None, 512)               0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 512)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 10)                5130      
=================================================================
Total params: 31,594
Trainable params: 31,594
Non-trainable params: 0

Now, doing the same with TensorFlow:

# Fit model using TensorFlow
import tensorflow as tf

X = tf.placeholder(shape=[None, 28, 28, 1], dtype=tf.float32, name="X")
Y = tf.placeholder(shape=[None, 10], dtype=tf.float32, name="Y")

net = tf.layers.conv2d(
  X, filters=32, kernel_size=5, padding="valid", activation=tf.nn.relu)
net = tf.layers.max_pooling2d(net, pool_size=2, strides=2)
net = tf.layers.conv2d(
  net, filters=32, kernel_size=5, padding="valid", activation=tf.nn.relu)
net = tf.layers.max_pooling2d(net, pool_size=2, strides=2)
net = tf.contrib.layers.flatten(net)
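# Note: tf.layers.dropout only drops units when training=True is passed;
# with the default (training=False) it acts as a pass-through.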
net = tf.layers.dropout(net, rate=0.25)
model = tf.layers.dense(net, name="output", units=10)

loss = tf.nn.softmax_cross_entropy_with_logits_v2(labels=Y, logits=model)
opt = tf.train.AdamOptimizer().minimize(loss)
accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(model, 1), tf.argmax(Y, 1)), tf.float32))

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    L = []
    l_ = 0
    for i in range(x_train.shape[0] // batch_size):
        x, y = x_train[i*batch_size:(i+1)*batch_size],\
            y_train[i*batch_size:(i+1)*batch_size]
        l, _ = sess.run([loss, opt], feed_dict={X: x, Y: y})
        l_ += np.mean(l)
    L.append(l_ / (x_train.shape[0] // batch_size))
    print('Training loss: {:.3f}'.format(L[-1]))

    acc = []
    for j in range(x_test.shape[0] // batch_size):
        x, y = x_test[j*batch_size:(j+1)*batch_size],\
            y_test[j*batch_size:(j+1)*batch_size]
        acc.append(sess.run(accuracy, feed_dict={X: x, Y: y}))
    print('Test set accuracy: {:.3f}'.format(np.mean(acc)))

Results:

Training loss: 0.519
Test set accuracy: 0.968

Answer 1 (score: 0)

Some possible improvements to your model:

I have used CNNs on different problems and have always gotten good improvements with regularization techniques, the best one being dropout.

I suggest using Dropout on the Dense layers and, in case you also apply it to the convolutional layers, with a lower probability; a sketch follows below.
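A minimal Keras sketch of this suggestion (the 0.2 and 0.5 rates are illustrative starting points, not tuned values):

from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D, Flatten, Dense, Dropout

model = Sequential()
model.add(Conv2D(32, 5, activation='relu', input_shape=(28, 28, 1)))
model.add(MaxPool2D())
model.add(Dropout(0.2))   # lower rate after the convolutional block
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))   # higher rate on the Dense layer
model.add(Dense(10, activation='softmax'))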

Data augmentation on the input data is also very important, though whether it applies depends on the problem domain; see the sketch below.
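For image data such as MNIST, one option is Keras's ImageDataGenerator. A sketch, assuming the model and x_train/y_train from the first answer, with illustrative (untuned) ranges:

from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=10,       # random rotations of up to 10 degrees
    width_shift_range=0.1,   # horizontal shifts of up to 10% of the width
    height_shift_range=0.1)  # vertical shifts of up to 10% of the height

model.fit_generator(datagen.flow(x_train, y_train, batch_size=32),
                    steps_per_epoch=len(x_train) // 32, epochs=1)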

P.S.: In one case I had to change the optimization from Adam to SGD with momentum, so playing with the optimizer makes sense. You can also consider gradient clipping when your network is starving and not improving, which may be a numerical issue. A sketch of both follows.
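A sketch of both points in Keras (momentum 0.9 and clipnorm 1.0 are common starting values, not tuned recommendations):

from keras.optimizers import SGD

# SGD with momentum instead of Adam; clipnorm caps each gradient's L2 norm
opt = SGD(lr=0.01, momentum=0.9, clipnorm=1.0)
model.compile(optimizer=opt,
              loss='categorical_crossentropy', metrics=['accuracy'])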