I am trying to train MobileNet for a multi-class classification task:
from keras.applications.mobilenet import MobileNet, preprocess_input
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Dense, Dropout, GlobalAveragePooling2D
from keras.models import Model
from keras.callbacks import EarlyStopping, ReduceLROnPlateau

train_datagen = ImageDataGenerator(
    preprocessing_function=preprocess_input)

training_generator = train_datagen.flow_from_directory(
    directory=train_data_dir,
    target_size=(parameters["img_width"], parameters["img_height"]),
    batch_size=parameters["batch_size"],
    class_mode="categorical",
    subset="training",  # only has an effect if validation_split is set on the ImageDataGenerator
    color_mode="rgb",
    seed=42)
# Define the model
base_model = MobileNet(weights='imagenet',
                       include_top=False,
                       input_shape=(128, 128, 3))  # load MobileNet and discard its 1000-class classification head

# Freeze every base-model layer; only the new head below is trained
for layer in base_model.layers:
    layer.trainable = False

x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(800, activation='relu')(x)  # dense layers so the head can learn more complex decision functions
x = Dense(600, activation='relu')(x)  # dense layer 2
x = Dropout(0.8)(x)
x = Dense(256, activation='relu')(x)  # dense layer 3
x = Dropout(0.2)(x)
preds = Dense(N_classes, activation='softmax')(x)  # final layer with softmax activation

model = Model(inputs=base_model.input, outputs=preds)
model.compile(optimizer="Adam",
              loss='categorical_crossentropy',
              metrics=['accuracy'])
I then run the training with the validation set pointed at the training set itself:
history = model.fit_generator(
    training_generator,
    steps_per_epoch=training_generator.n // parameters["batch_size"],
    epochs=parameters["epochs"],
    ##### VALIDATION SET = TRAINING SET
    validation_data=training_generator,
    validation_steps=training_generator.n // parameters["batch_size"],
    callbacks=[
        EarlyStopping(monitor="acc", patience=8, restore_best_weights=False),
        ReduceLROnPlateau(patience=3)])
However, during training I see a significant gap between training accuracy and validation accuracy, even though they are computed on the same dataset. What could be the cause?
Answer 0 (score: 0)
Training a neural network involves random shuffling of the data in the training set, so runs are not exactly reproducible. If you see a significant difference in accuracy, you can try pinning down the sources of randomness.
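A minimal sketch of seeding the usual random sources (my illustration, assuming the TF 1.x / standalone-Keras stack that the question's fit_generator call implies):

import os
import random

import numpy as np
import tensorflow as tf

# Pin every pseudo-random source that affects weight init and data shuffling
os.environ["PYTHONHASHSEED"] = "42"
random.seed(42)
np.random.seed(42)
tf.set_random_seed(42)  # on TF 2.x use tf.random.set_seed(42)
# Note: some GPU ops (e.g. cuDNN convolutions) can remain nondeterministic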
Later edit: a significant difference in accuracy during training does not matter much on its own. Training is an iterative optimization process that minimizes the loss function (here, categorical cross-entropy), and it takes a while to approach the optimum.
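For reference, the per-sample categorical cross-entropy for a one-hot label y and predicted probabilities p over N_classes classes is:

L(y, p) = -Σ_k y_k · log(p_k)

which reduces to -log(p_c) for the true class c, so the loss can still be falling while accuracy has not yet caught up.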
Answer 1 (score: 0)
I don't know the exact cause, but I reproduced your problem. It happens because you use the SAME generator instance for training and then run it again for validation. If you create a SEPARATE generator for validation, fed with the same training data, then once you have run enough epochs for training accuracy to reach the 90% range, you will see the validation accuracy stabilize and converge toward the training accuracy. [Plot: Train-Valid Acc vs Epochs]
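A minimal sketch of such a separate validation generator, reusing train_data_dir and parameters from the question (shuffle=False is my addition so each validation pass sees the data in a fixed order):

# A second, independent generator that reads the same training images
valid_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)

validation_generator = valid_datagen.flow_from_directory(
    directory=train_data_dir,  # same directory as training
    target_size=(parameters["img_width"], parameters["img_height"]),
    batch_size=parameters["batch_size"],
    class_mode="categorical",
    color_mode="rgb",
    shuffle=False)  # deterministic order for validation

history = model.fit_generator(
    training_generator,
    steps_per_epoch=training_generator.n // parameters["batch_size"],
    epochs=parameters["epochs"],
    validation_data=validation_generator,
    validation_steps=validation_generator.n // parameters["batch_size"])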