Rewriting a Sequential model with the Functional API

Date: 2017-12-10 02:11:49

Tags: machine-learning neural-network computer-vision keras conv-neural-network

I am trying to rewrite the Sequential model of a Network In Network CNN using the Functional API. I am using it with the CIFAR-10 dataset. The Sequential model trains without problems, but the Functional API model gets stuck during training. I have probably missed something while rewriting the model.

Here is a reproducible example:

Dependencies:

from keras.models import Model, Sequential
from keras.layers import Input, Conv2D, MaxPooling2D, GlobalAveragePooling2D, Dropout, Activation
from keras.utils import to_categorical
from keras.losses import categorical_crossentropy
from keras.optimizers import Adam
from keras.datasets import cifar10

Load the dataset:

(x_train, y_train), (x_test, y_test) = cifar10.load_data()
# scale pixel values from [0, 255] to [0, 1]
x_train = x_train / 255.
x_test = x_test / 255.
# one-hot encode the 10 class labels
y_train = to_categorical(y_train, num_classes=10)
y_test = to_categorical(y_test, num_classes=10)
input_shape = x_train[0,:,:,:].shape
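As a quick optional sanity check (CIFAR-10 images are 32x32 RGB):

print(x_train.shape)   # (50000, 32, 32, 3)
print(input_shape)     # (32, 32, 3)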

Here is the working Sequential model:

model = Sequential()

#mlpconv block1
model.add(Conv2D(32, (5, 5), activation='relu', padding='valid', input_shape=input_shape))
model.add(Conv2D(32, (1, 1), activation='relu'))
model.add(Conv2D(32, (1, 1), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.5))

#mlpconv block2
model.add(Conv2D(64, (3, 3), activation='relu', padding='valid'))
model.add(Conv2D(64, (1, 1), activation='relu'))
model.add(Conv2D(64, (1, 1), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.5))

#mlpconv block3
model.add(Conv2D(128, (3, 3), activation='relu', padding='valid'))
model.add(Conv2D(32, (1, 1), activation='relu'))
model.add(Conv2D(10, (1, 1), activation='relu'))
model.add(GlobalAveragePooling2D())

model.add(Activation('softmax'))

Compile and train:

model.compile(loss=categorical_crossentropy, optimizer=Adam(), metrics=['acc'])

_ = model.fit(x=x_train, y=y_train, batch_size=32,
              epochs=200, verbose=1, validation_split=0.2)

Within three epochs, the model reaches a validation accuracy close to 50%.

Here is the same model rewritten using the Functional API:

model_input = Input(shape=input_shape)

#mlpconv block1
x = Conv2D(32, (5, 5), activation='relu', padding='valid')(model_input)
x = Conv2D(32, (1, 1), activation='relu')(x)
x = Conv2D(32, (1, 1), activation='relu')(x)
x = MaxPooling2D((2, 2))(x)
x = Dropout(0.5)(x)

#mlpconv block2
x = Conv2D(64, (3, 3), activation='relu', padding='valid')(x)
x = Conv2D(64, (1, 1), activation='relu')(x)
x = Conv2D(64, (1, 1), activation='relu')(x)
x = MaxPooling2D((2, 2))(x)
x = Dropout(0.5)(x)

#mlpconv block3
x = Conv2D(128, (3, 3), activation='relu', padding='valid')(x)
x = Conv2D(32, (1, 1), activation='relu')(x)
x = Conv2D(10, (1, 1), activation='relu')(x)

x = GlobalAveragePooling2D()(x)
x = Activation('softmax')(x)

model = Model(model_input, x, name='nin_cnn')
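The model is then compiled and trained with the same parameters as the Sequential model; repeating the calls here keeps the example self-contained:

model.compile(loss=categorical_crossentropy, optimizer=Adam(), metrics=['acc'])

_ = model.fit(x=x_train, y=y_train, batch_size=32,
              epochs=200, verbose=1, validation_split=0.2)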

During training, however, the training accuracy stays stuck at 0.10, meaning the model never improves and is effectively picking one of the 10 classes at random.

What did I miss when rewriting the model? Calling model.summary() shows the two models as identical, except for the explicit Input layer in the Functional API model.
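One rough way to verify this layer by layer is to compare the layer configs directly. This is just a sketch: seq_model and fn_model are hypothetical names for the two models built above (both are simply called model in the code), and the auto-generated metadata keys are ignored since they differ between the two builds:

for seq_layer, fn_layer in zip(seq_model.layers, fn_model.layers[1:]):  # layers[1:] skips the InputLayer
    a, b = seq_layer.get_config(), fn_layer.get_config()
    for key in ('name', 'batch_input_shape', 'dtype'):  # ignore auto-generated metadata
        a.pop(key, None)
        b.pop(key, None)
    print(seq_layer.__class__.__name__, a == b)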

1 answer:

Answer 0 (score: 1)

Removing the activation from the final conv layer fixes the problem:

x = Conv2D(10, (1, 1))(x)

I am still not sure why the Sequential model trains fine with the activation on that layer.
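Put together, the fixed final block of the Functional model would then read as follows (a sketch based only on the change above; the softmax still comes after the global average pooling, as before):

#mlpconv block3
x = Conv2D(128, (3, 3), activation='relu', padding='valid')(x)
x = Conv2D(32, (1, 1), activation='relu')(x)
x = Conv2D(10, (1, 1))(x)  # no ReLU here: output raw per-class feature maps
x = GlobalAveragePooling2D()(x)
x = Activation('softmax')(x)

model = Model(model_input, x, name='nin_cnn')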