Training loss higher than validation loss

Date: 2018-05-17 09:18:58

Tags: python tensorflow keras

I am trying to train a regression model for a dummy function of 3 variables, using a fully connected neural network in Keras, and I always get a training loss that is much higher than the validation loss.

I split the dataset into 2/3 for training and 1/3 for validation. I have tried many different things:

  • Changing the architecture
  • Adding more neurons
  • Using regularization (a sketch follows this list)
  • Trying different batch sizes
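
A minimal sketch of what the regularization and batch-size variants look like in Keras; the layer width, l2 factor and batch size below are illustrative placeholders, not the exact values that were tried:

from keras.models import Sequential
from keras.layers import Dense
from keras import regularizers

model = Sequential()
# L2 weight penalty on each hidden layer (factor is a placeholder)
model.add(Dense(64, activation='relu', input_dim=3,
                kernel_regularizer=regularizers.l2(0.01)))
model.add(Dense(64, activation='relu',
                kernel_regularizer=regularizers.l2(0.01)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mean_squared_error')

# Different batch sizes are just the batch_size argument of fit:
# model.fit(x_train, y_train, epochs=100, batch_size=32,
#           validation_data=(x_val, y_val))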

The training error remains an order of magnitude higher than the validation error:

Epoch 5995/6000
4020/4020 [==============================] - 0s 78us/step - loss: 1.2446e-04 - mean_squared_error: 1.2446e-04 - val_loss: 1.3953e-05 - val_mean_squared_error: 1.3953e-05
Epoch 5996/6000
4020/4020 [==============================] - 0s 98us/step - loss: 1.2549e-04 - mean_squared_error: 1.2549e-04 - val_loss: 1.5730e-05 - val_mean_squared_error: 1.5730e-05
Epoch 5997/6000
4020/4020 [==============================] - 0s 105us/step - loss: 1.2500e-04 - mean_squared_error: 1.2500e-04 - val_loss: 1.4372e-05 - val_mean_squared_error: 1.4372e-05
Epoch 5998/6000
4020/4020 [==============================] - 0s 96us/step - loss: 1.2500e-04 - mean_squared_error: 1.2500e-04 - val_loss: 1.4151e-05 - val_mean_squared_error: 1.4151e-05
Epoch 5999/6000
4020/4020 [==============================] - 0s 80us/step - loss: 1.2487e-04 - mean_squared_error: 1.2487e-04 - val_loss: 1.4342e-05 - val_mean_squared_error: 1.4342e-05
Epoch 6000/6000
4020/4020 [==============================] - 0s 79us/step - loss: 1.2494e-04 - mean_squared_error: 1.2494e-04 - val_loss: 1.4769e-05 - val_mean_squared_error: 1.4769e-05

This makes no sense, please help!

EDIT: here is the full code

I have 6000 training examples

# -*- coding: utf-8 -*-
"""
Created on Mon Feb 26 13:40:03 2018

@author: Michele
"""
#from keras.datasets import reuters
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout, LSTM
from keras import optimizers
import matplotlib.pyplot as plt
import os 
import pylab 
from keras.constraints import maxnorm
from sklearn.model_selection import train_test_split
from keras import regularizers
from sklearn.preprocessing import MinMaxScaler
import math
from sklearn.metrics import mean_squared_error
import keras

# fix random seed for reproducibility
seed=7
np.random.seed(seed)

dataset = np.loadtxt("BabbaX.csv", delimiter=",")
 #split into input (X) and output (Y) variables
#x = dataset.transpose()[:,10:15] #only use power
x = dataset
del(dataset) # delete container
dataset = np.loadtxt("BabbaY.csv", delimiter=",")
 #split into input (X) and output (Y) variables
y = dataset.transpose()
del(dataset) # delete container

 #scale labels from 0 to 1
scaler = MinMaxScaler(feature_range=(0, 1))
y = np.reshape(y, (y.shape[0],1))
y = scaler.fit_transform(y)

lenData=x.shape[0]
x=np.transpose(x)

xtrain=x[:,0:round(lenData*0.67)]
ytrain=y[0:round(lenData*0.67),]
xtest=x[:,round(lenData*0.67):round(lenData*1.0)]
ytest=y[round(lenData*0.67):round(lenData*1.0)]

xtrain=np.transpose(xtrain)
xtest=np.transpose(xtest)    

l2_lambda = 0.1 #reg factor

#sequential type of model
model = Sequential() 
#stacking layers with .add
units=300
#model.add(Dense(units, input_dim=xtest.shape[1], activation='relu', kernel_initializer='normal', kernel_regularizer=regularizers.l2(l2_lambda), kernel_constraint=maxnorm(3)))
model.add(Dense(units, activation='relu', input_dim=xtest.shape[1]))
#model.add(Dropout(0.1))
model.add(Dense(units, activation='relu'))
#model.add(Dropout(0.1))
model.add(Dense(1)) #no activation function should be used for the output layer

# It is recommended to leave the parameters of this optimizer at their default
# values (except the learning rate, which can be freely tuned).
rms = optimizers.RMSprop(lr=0.00001, rho=0.9, epsilon=None, decay=0)
adam = keras.optimizers.Adam(lr=0.00001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=1e-6, amsgrad=False)
#adam = keras.optimizers.Adam(lr=0.01, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)

#configure learning process with .compile
model.compile(optimizer=adam, loss='mean_squared_error', metrics=['mse'])

# fit the model (iterate on the training data in batches)
history = model.fit(xtrain, ytrain, epochs=1000, batch_size=round(xtest.shape[0]/100),
              validation_data=(xtest, ytest), shuffle=True, verbose=2)

#extract weights for each layer
weights = [layer.get_weights() for layer in model.layers]

#evaluate on training data set
valuesTrain=model.predict(xtrain)

#evaluate on test data set
valuesTest=model.predict(xtest)

 #invert predictions
valuesTrain = scaler.inverse_transform(valuesTrain)
ytrain = scaler.inverse_transform(ytrain)
valuesTest = scaler.inverse_transform(valuesTest)
ytest = scaler.inverse_transform(ytest)
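
A small addition that may help interpret the numbers (a hedged sketch, not part of the original post): evaluating both splits with the same final weights removes the asymmetry between the running per-epoch training loss and the end-of-epoch validation loss. Since ytrain and ytest have just been inverse-transformed above, they are re-scaled here before calling evaluate:

# Hedged sketch: compare both splits with the *same* final weights.
# ytrain/ytest were inverse-transformed above, so scale them back first,
# because the model was trained on scaled labels.
train_scores = model.evaluate(xtrain, scaler.transform(ytrain), verbose=0)
test_scores = model.evaluate(xtest, scaler.transform(ytest), verbose=0)
print("final-weight MSE - train: %.3e  validation: %.3e"
      % (train_scores[0], test_scores[0]))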

2 Answers:

Answer 0 (score: 1)

TL;DR: When a model is learning well and fast, the validation loss can be lower than the training loss, because the validation is performed on the updated model, while the training loss is computed with none (without batching) or only some (with batching) of the updates applied.

Well, I think I found out what is happening here. I used the following code to test this.

import numpy as np
import keras
from keras.models import Sequential
from keras.layers import Dense
import matplotlib.pyplot as plt

np.random.seed(7)

N_DATA = 6000

x = np.random.uniform(-10, 10, (3, N_DATA))
y = x[0] + x[1]**2 + x[2]**3

xtrain = x[:, 0:round(N_DATA*0.67)]
ytrain = y[0:round(N_DATA*0.67)]

xtest = x[:, round(N_DATA*0.67):N_DATA]
ytest = y[round(N_DATA*0.67):N_DATA]

xtrain = np.transpose(xtrain)
xtest = np.transpose(xtest)

model = Sequential()
model.add(Dense(10, activation='relu', input_dim=3))
model.add(Dense(5, activation='relu'))
model.add(Dense(1))

adam = keras.optimizers.Adam()

# configure learning process with .compile
model.compile(optimizer=adam, loss='mean_squared_error', metrics=['mse'])

# fit the model (iterate on the training data in batches)
history = model.fit(xtrain, ytrain, epochs=50,
                    batch_size=round(N_DATA/100),
                    validation_data=(xtest, ytest), shuffle=False, verbose=2)

plt.plot(history.history['mean_squared_error'])
plt.plot(history.history['val_loss'])
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()

This is essentially the same as your code and reproduces the problem, which is actually not a problem at all. Simply change

history = model.fit(xtrain, ytrain, epochs=50,
                    batch_size=round(N_DATA/100),
                    validation_data=(xtest, ytest), shuffle=False, verbose=2)

to

history = model.fit(xtrain, ytrain, epochs=50,
                    batch_size=round(N_DATA/100),
                    validation_data=(xtrain, ytrain), shuffle=False, verbose=2)

So instead of validating with your validation data, you validate with the training data again, and this leads to exactly the same behaviour. Weird, isn't it? No, actually not. What I think is happening is:

The mean_squared_error that Keras reports for each epoch is the training loss before the gradient updates have been applied, while the validation happens after the gradient updates have been applied, which makes sense.

For the highly stochastic problems NNs are usually applied to, you do not see this, because the data varies so much that the updated weights are simply not good enough to describe the validation data; the slight overfitting effect on the training data is still so much stronger that, even after updating the weights, the validation loss stays higher than the training loss from before. That is only how I think it is, though, and I may be completely wrong.
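
One way to check this claim empirically (a hedged sketch, not part of the original answer): re-evaluate the training set with the end-of-epoch weights in a callback and compare that number with the running training loss Keras prints. The callback class name is hypothetical, and reusing model, xtrain, ytrain, xtest, ytest from the script above is an assumption:

import keras

class EndOfEpochTrainLoss(keras.callbacks.Callback):
    """Hypothetical helper: evaluate the training set with the weights as they
    are at the end of the epoch, i.e. the same weights the validation loss uses."""
    def __init__(self, x, y):
        super(EndOfEpochTrainLoss, self).__init__()
        self.x, self.y = x, y

    def on_epoch_end(self, epoch, logs=None):
        result = self.model.evaluate(self.x, self.y, verbose=0)
        loss_now = result[0] if isinstance(result, list) else result
        print("epoch %d: running train loss %.3e, "
              "train loss with end-of-epoch weights %.3e, val loss %.3e"
              % (epoch, logs['loss'], loss_now, logs['val_loss']))

# Usage (assumes the model and data defined above):
# model.fit(xtrain, ytrain, epochs=50, batch_size=round(N_DATA/100),
#           validation_data=(xtest, ytest),
#           callbacks=[EndOfEpochTrainLoss(xtrain, ytrain)], verbose=2)

If the explanation above holds, the train loss computed with the end-of-epoch weights should be close to (or below) the validation loss, while the running train loss stays higher.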

Answer 1 (score: -2)

If the training loss is slightly higher than or close to the validation loss, it means that the model is not overfitting. Always try to use the best features in order to reduce overfitting and to get better validation and test accuracy. A likely reason why your training loss is always higher is the features and data you are using for training.

See the following link and observe the training and validation loss when using dropout: http://danielnouri.org/notes/2014/12/17/using-convolutional-neural-nets-to-detect-facial-keypoints-tutorial/
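
For reference, a minimal sketch of adding dropout to a fully connected Keras regression model like the one in the question; the layer sizes and dropout rate are illustrative choices, not taken from the question or the linked tutorial:

from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential()
model.add(Dense(300, activation='relu', input_dim=3))
model.add(Dropout(0.1))   # randomly zero 10% of activations during training
model.add(Dense(300, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(1))       # linear output for regression
model.compile(optimizer='adam', loss='mean_squared_error')

Note that dropout itself is a common reason for the reported training loss to exceed the validation loss: dropout is active while the training loss is accumulated but disabled during validation.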