Loss will not decrease with VGG16 on 2.0, even with prepared data?

Date: 2020-04-27 19:31:58

Tags: python tensorflow keras kaggle

I am trying to create a facial expression recognition model. My dataset comes from a Kaggle competition ("smile recognition" or something like that). The data sits in a *.csv file as emotion + pixels, with the pixels stored as a single string (e.g. "3, pixel pixel pixel ..."), and I preprocess it with this method:

import numpy as np
import pandas as pd

def resizeImages(x):
    # Despite the name, this does not resize: it normalizes pixel values to [0, 1]
    x = x.astype('float32')
    x = x / 255.0
    return x

def loadData():
    print("Data loading START")
    rawData = pd.read_csv("../data/data.csv")
    pixels = rawData['pixels'].tolist()
    images = []
    for each in pixels:
        # Parse the space-separated pixel string into a width x height array
        # (width and height are module-level constants, 48 x 48)
        image = [int(pixel) for pixel in each.split()]
        image = np.asarray(image).reshape(width, height)
        images.append(image.astype('float32'))
    images = np.asarray(images)
    # Add a channel axis and repeat the grayscale channel 3 times for VGG16
    images = np.expand_dims(images, -1)
    images = np.repeat(images, 3, axis=3)
    # One-hot encode the emotion labels
    emotions = pd.get_dummies(rawData['emotion'])
    print(emotions)
    images = resizeImages(images)
    print("Data loading DONE")
    return images, emotions
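As a sanity check (not in the original post), the parsing and reshaping logic above can be exercised on a single hypothetical pixel string; `fake_row` and its contents are illustrative, not real dataset values:

```python
import numpy as np

width, height = 48, 48  # same constants as in the model file

# A fake "pixels" string of 48*48 values, as one would appear in a CSV row
fake_row = " ".join(["128"] * (width * height))

image = [int(pixel) for pixel in fake_row.split()]
image = np.asarray(image).reshape(width, height).astype('float32')

images = np.asarray([image])           # batch of one image
images = np.expand_dims(images, -1)    # shape (1, 48, 48, 1)
images = np.repeat(images, 3, axis=3)  # shape (1, 48, 48, 3), 3 channels for VGG16
images = images / 255.0                # same normalization as resizeImages

print(images.shape)  # (1, 48, 48, 3)
```

This confirms the pipeline produces the (48, 48, 3) float input that the VGG16 call below expects.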

My model is built like this:

#Constants
width, height, depth = 48, 48, 3
numOfClasses = 7
epochs = 10
batch_size = 50

#Loading Data
from Model.DataProcess import loadData
pixels, emotions = loadData()

#Splitting Data
from sklearn.model_selection import train_test_split
xtrain, xtest, ytrain, ytest  = train_test_split(pixels, emotions, test_size = 0.2)

#Loading VGG16 Model
from keras.applications.vgg16 import VGG16
vgg16model = VGG16(include_top=False, weights='imagenet',
                 input_shape=(width, height, depth), pooling='avg')

#Freezing the VGG16 layers - not trainable
for layer in vgg16model.layers:
    layer.trainable = False

#Creating final classifier
from keras.models import Sequential
from keras.layers import Dropout, Dense

myModel = Sequential([
    vgg16model,
    # input_shape is only needed on the first layer of a Sequential model,
    # and vgg16model with pooling='avg' already emits a 512-dim vector
    Dense(256, activation="relu"),
    Dense(256, activation="relu"),
    Dropout(0.25),
    Dense(128),
    # output_dim is the old Keras 1 keyword and fails in Keras 2;
    # pass the unit count positionally instead
    Dense(numOfClasses, activation="softmax")
])
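The trainable parameter counts of the classifier head can be checked by hand (a plain-Python sketch, not from the original post; the numbers assume vgg16model with pooling='avg' emits a 512-dimensional vector, and dense_params is an illustrative helper):

```python
# Dense layer parameter count = inputs * units + units (the biases)
def dense_params(inputs, units):
    return inputs * units + units

# (input size, units) for each Dense layer in the head above
head = [(512, 256), (256, 256), (256, 128), (128, 7)]
counts = [dense_params(i, u) for i, u in head]
print(counts)       # [131328, 65792, 32896, 903]
print(sum(counts))  # 230919 trainable parameters in the head
```

These totals should match the trainable-parameter lines printed by myModel.summary() below.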

myModel.summary()

#Creating optimizer
from keras.optimizers import Adamax
adamax = Adamax()

myModel.compile(loss='categorical_crossentropy',
                   optimizer=adamax,
                   metrics=['accuracy'])


#Fitting model
# Note: no callbacks are defined anywhere above, so none are passed here,
# and batch_size is now actually used
history = myModel.fit(
    xtrain, ytrain,
    batch_size=batch_size,
    epochs=epochs,
    validation_data=(xtest, ytest)
)

Am I doing something wrong? After a number of epochs my loss is still around 1.5 and the accuracy stays close to 0.4.
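For context (not in the original post): a 7-class model that always predicts the uniform distribution has a categorical cross-entropy of ln(7) ≈ 1.95 and chance-level accuracy of 1/7 ≈ 0.14, so a loss of ~1.5 at ~0.4 accuracy is better than chance but far from converged. A quick check:

```python
import math

numOfClasses = 7

# Cross-entropy of a uniform prediction against any one-hot label
uniform_loss = -math.log(1.0 / numOfClasses)
print(round(uniform_loss, 4))  # 1.9459

# Chance-level accuracy for balanced 7-class data
chance_acc = 1.0 / numOfClasses
print(round(chance_acc, 4))  # 0.1429
```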

0 Answers:

There are no answers yet.