High accuracy in a CNN model with transfer learning, but low accuracy on the training set

Time: 2019-05-02 01:51:34

Tags: python tensorflow machine-learning keras deep-learning

I have an image classification model that uses a CNN. I did transfer learning with MobileNet: on top of MobileNet I added 4 layers that learn weights for my images, while MobileNet's own weights are kept frozen. During training the model reaches 91% accuracy, yet when I evaluate it on the same training set (train_generator) the accuracy is much lower, around 41%. Why do I get different results from the same training data? Is there a difference between the accuracy reported by model.fit_generator and the one returned by model.evaluate_generator, or is there a problem with my data? Please help... how can I improve the accuracy? The full code is below.

from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.applications import MobileNet
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.mobilenet import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
# MobileNet convolutional base with ImageNet weights and no classifier head
base_model = MobileNet(weights='imagenet', include_top=False)

# New classification head trained on top of the frozen base
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
x = Dense(1024, activation='relu')(x)
x = Dense(512, activation='relu')(x)
preds = Dense(7, activation='softmax')(x)

model = Model(inputs=base_model.input, outputs=preds)

# Freeze everything except the newly added layers at the top
for layer in model.layers[:-4]:
    layer.trainable = False
for layer in model.layers[-4:]:
    layer.trainable = True

train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)

train_generator = train_datagen.flow_from_directory('/Users/LG/Desktop/finger',
                                                   target_size=(224, 224),
                                                   color_mode='rgb',
                                                   batch_size=32,
                                                   class_mode='categorical',
                                                   shuffle=True)

model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=['accuracy'])

# One pass over the training data per epoch (n // batch_size steps)
step_size_train = train_generator.n // train_generator.batch_size
model.fit_generator(generator=train_generator,
                    steps_per_epoch=step_size_train,
                    epochs=10)

Epoch 1/10 17/17 [==============================] - 53s 3s/step - loss: 1.9354 - acc: 0.3026

Epoch 2/10 17/17 [==============================] - 52s 3s/step - loss: 1.1933 - acc: 0.5276

Epoch 3/10 17/17 [==============================] - 52s 3s/step - loss: 0.8936 - acc: 0.6787

Epoch 4/10 17/17 [==============================] - 54s 3s/step - loss: 0.6040 - acc: 0.7843

Epoch 5/10 17/17 [==============================] - 53s 3s/step - loss: 0.5367 - acc: 0.8080

Epoch 6/10 17/17 [==============================] - 55s 3s/step - loss: 0.2676 - acc: 0.9099

Epoch 7/10 17/17 [==============================] - 52s 3s/step - loss: 0.4531 - acc: 0.8387

Epoch 8/10 17/17 [==============================] - 53s 3s/step - loss: 0.3580 - acc: 0.8747

Epoch 9/10 17/17 [==============================] - 55s 3s/step - loss: 0.1963 - acc: 0.9301

Epoch 10/10 17/17 [==============================] - 53s 3s/step - loss: 0.2237 - acc: 0.9133

model.evaluate_generator(train_generator, steps=5)

[2.169835996627808,0.41875]
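
For comparison, here is a minimal sketch (not part of the original post) of evaluating over the entire training set with shuffling turned off, so the result covers every image exactly once and is directly comparable to the per-epoch training accuracy above. It reuses model, ImageDataGenerator and preprocess_input from the code above; eval_datagen, eval_generator and eval_steps are hypothetical names.

# Hypothetical evaluation pass over the full, un-shuffled training set
eval_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
eval_generator = eval_datagen.flow_from_directory('/Users/LG/Desktop/finger',
                                                  target_size=(224, 224),
                                                  color_mode='rgb',
                                                  batch_size=32,
                                                  class_mode='categorical',
                                                  shuffle=False)  # fixed order, no reshuffling between steps

# ceil(n / batch_size) steps so the whole directory is covered once
eval_steps = (eval_generator.n + eval_generator.batch_size - 1) // eval_generator.batch_size
loss, acc = model.evaluate_generator(eval_generator, steps=eval_steps)
print(loss, acc)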

0 Answers:

No answers yet.