Keras CNN transfer model: reducing the size of the final model?

Asked: 2018-07-30 18:10:03

Tags: python keras conv-neural-network

I'm working with several CNNs that need to run on a mobile device. If I build these CNNs from scratch (black-and-white, 256x256 input), I can produce binary classification models of roughly 10 MB, which is a great size for use on mobile devices.

However, using transfer learning with something like VGG16 pre-trained on ImageNet gets me much higher classification accuracy. But then the model size is closer to 100 MB! That's 10x the from-scratch size. I do have to take the grayscale images and convert them to 3 channels, which accounts for some of the size. Also, freezing fewer layers (e.g. 10 instead of 14) reduces the size, but only very slightly.
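(For context, the channel conversion just means stacking the single grayscale plane three times so the images match VGG16's expected 3-channel input. A minimal sketch of that step, assuming the images are loaded as 2-D numpy arrays:)

```
import numpy as np

# hypothetical example: `gray` stands in for one grayscale image as a 2-D array (256, 256)
gray = np.random.rand(256, 256).astype('float32')

# stack the same plane three times so the shape becomes (256, 256, 3) for VGG16
rgb_like = np.repeat(gray[..., np.newaxis], 3, axis=-1)
print(rgb_like.shape)  # (256, 256, 3)
```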

Any suggestions for how I can still take advantage of Keras transfer learning but end up with a much smaller model (for example, a way to have a transfer-learning model that accepts B&W input)?

Here is my transfer-learning baseline model:

```
from keras.applications.vgg16 import VGG16
from keras.models import Model
from keras.layers import Dense, Dropout, Flatten
from keras import optimizers

def baseline_model_func():
    # VGG16 convolutional base pre-trained on ImageNet, without its fully connected top
    vgg_model = VGG16(weights='imagenet', include_top=False,
                      input_shape=(input_dim, input_dim, 3))

    # freeze the first 10 layers; only the later conv blocks and the new head are trained
    for layer in vgg_model.layers[:10]:
        layer.trainable = False

    x = vgg_model.output
    x = Flatten()(x)  # use global pooling
    x = Dense(1024, activation="relu")(x)
    x = Dropout(0.5)(x)
    x = Dense(1024, activation="relu")(x)
    predictions = Dense(1, activation="sigmoid")(x)
    model_final = Model(inputs=vgg_model.input, outputs=predictions)
    model_final.compile(loss="binary_crossentropy",
                        optimizer=optimizers.SGD(lr=0.0001, momentum=0.9),
                        metrics=["accuracy"])
    return model_final
```
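(The `# use global pooling` note above refers to one idea I've seen for shrinking the head: replacing Flatten with GlobalAveragePooling2D, so the first Dense layer gets 512 inputs instead of the 8x8x512 = 32768 it gets after flattening. A rough sketch of that variant, not something I've validated:)

```
from keras.layers import GlobalAveragePooling2D

# sketch: pool the 8x8x512 VGG16 output down to 512 values, so the first Dense(1024)
# layer needs roughly 0.5M weights instead of ~33M after Flatten()
x = vgg_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation="relu")(x)
```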

And here is my from-scratch baseline CNN:

```
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

def baseline_model_func():
    model = Sequential()
    # six conv/pool blocks, doubling the filter count each time
    model.add(Conv2D(filters=16, kernel_size=2, padding='same', activation='relu',
                     input_shape=(input_dim, input_dim, channel_numbers)))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Conv2D(filters=64, kernel_size=2, padding='same', activation='relu'))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Conv2D(filters=128, kernel_size=2, padding='same', activation='relu'))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Conv2D(filters=256, kernel_size=2, padding='same', activation='relu'))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Conv2D(filters=512, kernel_size=2, padding='same', activation='relu'))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Dropout(0.3))
    model.add(Flatten())
    model.add(Dense(250, activation='relu'))
    model.add(Dropout(0.4))
    model.add(Dense(500, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
    return model
```
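(For reference, a quick way to compare the two architectures is to count parameters; Keras stores weights as float32, so each parameter is roughly 4 bytes on disk. A sketch of that check:)

```
# rough size check: ~4 bytes per float32 parameter, ignoring file-format overhead
model = baseline_model_func()
model.summary()
print("approx weight size: %.1f MB" % (model.count_params() * 4 / 1024 ** 2))
```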

0 Answers:

There are no answers to this question yet.