Training accuracy keeps improving, but validation loss and accuracy stay the same every epoch

Asked: 2020-11-05 14:18:22

Tags: tensorflow machine-learning keras deep-learning training-data

I am doing transfer learning with ResNet50. My dataset is clothing images (224x224x3) with 49 categories (classes): 1,000 training samples per class, 49,000 in total. The validation set has 200 samples per class, 9,800 in total. All of the data is standardized (including the validation data), so I don't think bad data or the standardization is the problem. The full code is below. (Please don't mind the variable names, e.g. nasnet; sorry about that.)
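The preprocessing script itself is not included, but roughly it standardizes each image and saves one (input, output) .npy pair per sample, which the DataGenerator below reads. A hypothetical sketch of that step (the resize and the per-image standardization are assumptions, not the exact script):

import numpy as np
import cv2

def standardize_and_save(image_paths, labels, out_dir='desktop/standardized_data'):
    'Hypothetical preprocessing: one standardized (input, output) .npy pair per sample'
    for idx, (path, label) in enumerate(zip(image_paths, labels)):
        img = cv2.imread(path)                                # BGR uint8
        img = cv2.resize(img, (224, 224)).astype(np.float32)
        img = (img - img.mean()) / (img.std() + 1e-7)         # per-image standardization (assumed)
        np.save(out_dir + '/input' + str(idx) + '.npy', img)
        np.save(out_dir + '/output' + str(idx) + '.npy', np.array(label))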


from keras.applications import NASNetMobile, ResNet50, VGG19, VGG16
from keras.models import Model
from keras.layers import Dense, AveragePooling2D, Dropout, Input, Flatten, BatchNormalization
from keras.optimizers import Adam, SGD

from datagenerator import DataGenerator
import numpy as np
import os

# Set Parameters and Max File Count
params = {'dim': (224,224),
          'batch_size': 64,
          'n_classes': 49,
          'n_channels': 3,
          'shuffle': True}

file_cnt = 49000
max_cnt = 58800

# Datasets

train = np.arange(file_cnt)            # indices 0..48999 -> training files
np.random.shuffle(train)
test = np.arange(file_cnt, max_cnt)    # indices 49000..58799 -> validation files
np.random.shuffle(test)
dataset = np.append(train, test)       # concatenation of all indices (not used below)

# Generators
training_generator = DataGenerator(train, **params)
validation_generator = DataGenerator(test, **params)

# get resnet layers and weights, add FCN
nasnet_model = ResNet50(weights='imagenet', include_top=False, input_tensor=Input(shape=(224, 224, 3)))  # named "nasnet" but actually ResNet50

nasnet_len = len(nasnet_model.layers)

x = nasnet_model.output
x = AveragePooling2D()(x)
x = Flatten(name="flatten")(x)
x = Dense(512, activation="relu")(x)
x = BatchNormalization()(x)
x = Dropout(0.5)(x)
x = Dense(512, activation="relu")(x)
x = BatchNormalization()(x)
x = Dropout(0.5)(x)
predictions = Dense(49, activation="softmax")(x)

model = Model(inputs=nasnet_model.input, outputs=predictions)

layer_num = len(model.layers)  # total layer count (not used below)

# freeze the pretrained base; train only the newly added head
for layer in model.layers[0:nasnet_len]:
    layer.trainable = False

for layer in model.layers[nasnet_len:]:
    layer.trainable = True

model.summary()

# training
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)  # defined but never used; Adam is passed to compile
model.compile(optimizer=Adam(lr=0.0001), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(generator=training_generator,
                    validation_data=validation_generator, epochs = 20)

# saving
model_json = model.to_json()
with open("model.json", "w") as json_file : 
    json_file.write(model_json)    
model.save_weights("model.h5")
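For completeness, reloading the saved model later is standard Keras (not part of the original script):

from keras.models import model_from_json
from keras.optimizers import Adam

with open("model.json", "r") as json_file:
    loaded_model = model_from_json(json_file.read())
loaded_model.load_weights("model.h5")
loaded_model.compile(optimizer=Adam(lr=0.0001), loss='categorical_crossentropy',
                     metrics=['accuracy'])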

The DataGenerator class:

from keras.utils import to_categorical, Sequence

import numpy as np
# (cv2 and a local "data" module were imported here originally but are never used)

class DataGenerator(Sequence):  # TODO: read up on keras.utils.Sequence
    'Generates data for Keras'
    def __init__(self, list_IDs, batch_size=64, dim=(224,224), n_channels=3,
                 n_classes=49, shuffle=True):
        'Initialization'
        self.dim = dim
        self.batch_size = batch_size
        self.list_IDs = list_IDs
        self.n_channels = n_channels
        self.n_classes = n_classes
        self.shuffle = shuffle
        self.on_epoch_end()

    def __len__(self):
        'Denotes the number of batches per epoch'
        return int(np.floor(len(self.list_IDs) / self.batch_size))

    def __getitem__(self, index):
        'Generate one batch of data'
        # Generate indexes of the batch
        indexes = self.indexes[index*self.batch_size:(index+1)*self.batch_size]

        # Find list of IDs
        list_IDs_temp = [self.list_IDs[k] for k in indexes]

        # Generate data
        X, y = self.__data_generation(list_IDs_temp)

        return X, y

    def on_epoch_end(self):
        'Updates indexes after each epoch'
        self.indexes = np.arange(len(self.list_IDs))
        if self.shuffle == True:
            np.random.shuffle(self.indexes)

    def __data_generation(self, list_IDs_temp):
        'Generates data containing batch_size samples' # X : (n_samples, *dim, n_channels)
        # Initialization
        X = np.empty((self.batch_size, *self.dim, self.n_channels))
        y = np.empty((self.batch_size), dtype=int)

        # Generate data
        for i, ID in enumerate(list_IDs_temp):
            # Store sample
            X[i,] = np.load('desktop/standardized_data/input' + str(int(ID)) + '.npy')
            # Store class
            y[i] = np.load('desktop/standardized_data/output' + str(int(ID)) + '.npy')
            #tmp = self.labels[int(ID)] 
            
        return X, to_categorical(y, num_classes=self.n_classes)
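A quick way to sanity-check the generators (a hypothetical snippet, not in the original code) is to pull one batch from each and compare shapes and statistics; standardized training and validation inputs should look alike:

Xb, yb = training_generator[0]
print(Xb.shape, yb.shape)        # expect (64, 224, 224, 3) and (64, 49)
print(Xb.mean(), Xb.std())       # standardized inputs: roughly 0 mean, unit std
Xv, yv = validation_generator[0]
print(Xv.mean(), Xv.std())       # should match the training statistics
print(yv.argmax(axis=1)[:10])    # spot-check a few validation labels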

But the validation loss and accuracy look like this:


Epoch 6/20
765/765 [==============================] - 812s 1s/step - loss: 0.8220 - accuracy: 0.7020 - val_loss: 6.2641 - val_accuracy: 0.0345
Epoch 7/20
765/765 [==============================] - 836s 1s/step - loss: 0.7322 - accuracy: 0.7254 - val_loss: 6.8545 - val_accuracy: 0.0394
Epoch 8/20
765/765 [==============================] - 822s 1s/step - loss: 0.6664 - accuracy: 0.7432 - val_loss: 6.6525 - val_accuracy: 0.0362
Epoch 9/20
765/765 [==============================] - 799s 1s/step - loss: 0.6098 - accuracy: 0.7597 - val_loss: 6.1669 - val_accuracy: 0.0346
Epoch 10/20
764/765 [============================>.] - ETA: 0s - loss: 0.5719 - accuracy: 0.7711  

Why is this happening to me... TT Please help me find the cause. (Epochs 1-5 are cut off above, but the validation loss and accuracy were just as flat there, while the training loss and accuracy improved every epoch.)

  • Update: I just tried testing on some of the training data, and the result is the same... that is, I trained on the training data and then evaluated on that same training data, but the accuracy gap is unchanged (see the sketch below).
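That check can be reproduced with something like this minimal sketch (using evaluate_generator from the same Keras version). If accuracy on the training data itself is also near chance (~1/49) at evaluation time, the problem lies in inference-time behaviour or the data pipeline rather than in fitting:

# Sanity check: evaluate the trained model on the *training* data itself
train_loss, train_acc = model.evaluate_generator(training_generator)
print('loss on training data: %.4f, acc: %.4f' % (train_loss, train_acc))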
