How to overfit data with Keras?

Asked: 2020-04-16 14:30:19

标签: machine-learning keras neural-network tf.keras

I am trying to build a simple regression model using keras and tensorflow. In my problem I have data in the form (x, y), where x and y are simply numbers. I would like to build a keras model in order to predict y using x as the input.

Since I think an image explains the problem better, here are my data:

[image: scatter plot of the (x, y) data]

We could discuss whether these data are good or not, but given my problem I cannot really change them.

My keras model is the following (the data are split into 30% test (X_test, y_test) and 70% training (X_train, y_train)):

import tensorflow as tf

model = tf.keras.Sequential()

model.add(tf.keras.layers.Dense(32, input_shape=(1,), activation="relu", name="first_layer"))
model.add(tf.keras.layers.Dense(16, activation="relu", name="second_layer"))
model.add(tf.keras.layers.Dense(1, name="output_layer"))

model.compile(loss = "mean_squared_error", optimizer = "adam", metrics=["mse"] )

history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=0, shuffle=False) 
eval_result = model.evaluate(X_test, y_test)
print("\n\nTest loss:", eval_result, "\n")

predict_Y = model.predict(X)

Note: X contains both X_test and X_train.
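(The question does not show how the split was made. A minimal sketch of one way to produce such a 70/30 split, assuming scikit-learn is available and using the X and Y lists given in the EDIT below; the split is random, so it will not reproduce the exact subsets listed there:)

from sklearn.model_selection import train_test_split

# 70% training / 30% test, as described above
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3)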

Plotting the predictions I get (the blue squares are the predictions predict_Y):

[image: data (red dots) and model predictions (blue squares)]

I have played around a lot with layers, activation functions and other parameters. My goal is to find the best parameters to train this model, but the actual question here is slightly different: in fact, I am having a hard time forcing the model to overfit the data (as you can see from the results above).

Does anybody have some kind of idea on how to achieve this overfitting?

This is the result I would like to get: [image: predictions (blue squares) lying exactly on top of the data (red dots)]

(The red dots are underneath the blue squares!)

EDIT:

Here are the data used in the example above; you can copy-paste them directly into a Python interpreter:

X_train = [0.704619794270697, 0.6779457393024553, 0.8207082120250023, 0.8588819357831449, 0.8692320257603844, 0.6878750931810429, 0.9556331888763945, 0.77677964510883, 0.7211381534179618, 0.6438319113259414, 0.6478339581502052, 0.9710222750072649, 0.8952188423349681, 0.6303124926673513, 0.9640316662124185, 0.869691568491902, 0.8320164648420931, 0.8236399177660375, 0.8877334038470911, 0.8084042532069621, 0.8045680821762038]
y_train = [0.7766424210611557, 0.8210846773655833, 0.9996114311913593, 0.8041331063189883, 0.9980525368790883, 0.8164056182686034, 0.8925487603333683, 0.7758207470960685, 0.37345286573743475, 0.9325789202459493, 0.6060269037514895, 0.9319771743389491, 0.9990691225991941, 0.9320002808310418, 0.9992560731072977, 0.9980241561997089, 0.8882905258641204, 0.4678339275898943, 0.9312152374846061, 0.9542371205095945, 0.8885893668675711]
X_test = [0.9749191829308574, 0.8735366740730178, 0.8882783211709133, 0.8022891400991644, 0.8650601322313454, 0.8697902997857514, 1.0, 0.8165876695985228, 0.8923841531760973]
y_test = [0.975653685270635, 0.9096752789481569, 0.6653736469114154, 0.46367666660348744, 0.9991817903431941, 1.0, 0.9111205717076893, 0.5264993912088891, 0.9989199241685126]
X = [0.704619794270697, 0.77677964510883, 0.7211381534179618, 0.6478339581502052, 0.6779457393024553, 0.8588819357831449, 0.8045680821762038, 0.8320164648420931, 0.8650601322313454, 0.8697902997857514, 0.8236399177660375, 0.6878750931810429, 0.8923841531760973, 0.8692320257603844, 0.8877334038470911, 0.8735366740730178, 0.8207082120250023, 0.8022891400991644, 0.6303124926673513, 0.8084042532069621, 0.869691568491902, 0.9710222750072649, 0.9556331888763945, 0.8882783211709133, 0.8165876695985228, 0.6438319113259414, 0.8952188423349681, 0.9749191829308574, 1.0, 0.9640316662124185]
Y = [0.7766424210611557, 0.7758207470960685, 0.37345286573743475, 0.6060269037514895, 0.8210846773655833, 0.8041331063189883, 0.8885893668675711, 0.8882905258641204, 0.9991817903431941, 1.0, 0.4678339275898943, 0.8164056182686034, 0.9989199241685126, 0.9980525368790883, 0.9312152374846061, 0.9096752789481569, 0.9996114311913593, 0.46367666660348744, 0.9320002808310418, 0.9542371205095945, 0.9980241561997089, 0.9319771743389491, 0.8925487603333683, 0.6653736469114154, 0.5264993912088891, 0.9325789202459493, 0.9990691225991941, 0.975653685270635, 0.9111205717076893, 0.9992560731072977]

where X contains the list of x values and Y the corresponding y values. (X_test, y_test) and (X_train, y_train) are two (non-overlapping) subsets of (X, Y).

To predict and display the model results I simply use matplotlib (imported as plt):

predict_Y = model.predict(X)
plt.plot(X, Y, "ro", X, predict_Y, "bs")
plt.show()

3 Answers:

Answer 0 (score: 2):

Overfitted models are rarely useful in real life. It seems to me that OP is well aware of that, but wants to see whether NNs are indeed capable of fitting (bounded) arbitrary functions or not. On one hand, the input-output data in the example seem to follow no discernible pattern. On the other hand, both input and output are scalars in [0, 1] and there are only 21 data points in the training set.

Based on my experiments and results, we can indeed overfit as requested. See the image below.

[image: training data (y_true) vs. predictions (y_pred)]

Numerical results:

           x    y_true    y_pred     error
0   0.704620  0.776642  0.773753 -0.002889
1   0.677946  0.821085  0.819597 -0.001488
2   0.820708  0.999611  0.999813  0.000202
3   0.858882  0.804133  0.805160  0.001026
4   0.869232  0.998053  0.997862 -0.000190
5   0.687875  0.816406  0.814692 -0.001714
6   0.955633  0.892549  0.893117  0.000569
7   0.776780  0.775821  0.779289  0.003469
8   0.721138  0.373453  0.374007  0.000554
9   0.643832  0.932579  0.912565 -0.020014
10  0.647834  0.606027  0.607253  0.001226
11  0.971022  0.931977  0.931549 -0.000428
12  0.895219  0.999069  0.999051 -0.000018
13  0.630312  0.932000  0.930252 -0.001748
14  0.964032  0.999256  0.999204 -0.000052
15  0.869692  0.998024  0.997859 -0.000165
16  0.832016  0.888291  0.887883 -0.000407
17  0.823640  0.467834  0.460728 -0.007106
18  0.887733  0.931215  0.932790  0.001575
19  0.808404  0.954237  0.960282  0.006045
20  0.804568  0.888589  0.906829  0.018240
{'me': -0.00015776709314323828, 
 'mae': 0.00329163070145315, 
 'mse': 4.0713782563067185e-05, 
 'rmse': 0.006380735268216915}

OP's code seems fine to me. My changes were minor:

  1. Use a deeper network. A depth of 30 layers may not actually be necessary, but since we just want to overfit, I did not experiment much with the minimum depth required.
  2. Each Dense layer has 50 units. Again, this may be overkill.
  3. Add a batch-normalization layer after every 5th Dense layer.
  4. Halve the learning rate.
  5. Run the optimization for longer, using all 21 training examples in a single batch.
  6. Use MAE as the objective function. MSE is fine, but since we want to overfit, I want to penalize small errors the same way as large errors.
  7. Random numbers matter more here because the data appear to be arbitrary. That said, you should get similar results if you change the random seed and let the optimizer run long enough. In some cases the optimization does get stuck in a local minimum and does not produce overfitting (as requested by OP).

The code is below.

import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, BatchNormalization
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
import matplotlib.pyplot as plt

# Set seed just to have reproducible results
np.random.seed(84)
tf.random.set_seed(84)

# Load data from the post
# https://stackoverflow.com/questions/61252785/how-to-overfit-data-with-keras
X_train = np.array([0.704619794270697, 0.6779457393024553, 0.8207082120250023,
                    0.8588819357831449, 0.8692320257603844, 0.6878750931810429,
                    0.9556331888763945, 0.77677964510883, 0.7211381534179618,
                    0.6438319113259414, 0.6478339581502052, 0.9710222750072649,
                    0.8952188423349681, 0.6303124926673513, 0.9640316662124185,
                    0.869691568491902, 0.8320164648420931, 0.8236399177660375,
                    0.8877334038470911, 0.8084042532069621,
                    0.8045680821762038])
Y_train = np.array([0.7766424210611557, 0.8210846773655833, 0.9996114311913593,
                    0.8041331063189883, 0.9980525368790883, 0.8164056182686034,
                    0.8925487603333683, 0.7758207470960685,
                    0.37345286573743475, 0.9325789202459493,
                    0.6060269037514895, 0.9319771743389491, 0.9990691225991941,
                    0.9320002808310418, 0.9992560731072977, 0.9980241561997089,
                    0.8882905258641204, 0.4678339275898943, 0.9312152374846061,
                    0.9542371205095945, 0.8885893668675711])
X_test = np.array([0.9749191829308574, 0.8735366740730178, 0.8882783211709133,
                   0.8022891400991644, 0.8650601322313454, 0.8697902997857514,
                   1.0, 0.8165876695985228, 0.8923841531760973])
Y_test = np.array([0.975653685270635, 0.9096752789481569, 0.6653736469114154,
                   0.46367666660348744, 0.9991817903431941, 1.0,
                   0.9111205717076893, 0.5264993912088891, 0.9989199241685126])
X = np.array([0.704619794270697, 0.77677964510883, 0.7211381534179618,
              0.6478339581502052, 0.6779457393024553, 0.8588819357831449,
              0.8045680821762038, 0.8320164648420931, 0.8650601322313454,
              0.8697902997857514, 0.8236399177660375, 0.6878750931810429,
              0.8923841531760973, 0.8692320257603844, 0.8877334038470911,
              0.8735366740730178, 0.8207082120250023, 0.8022891400991644,
              0.6303124926673513, 0.8084042532069621, 0.869691568491902,
              0.9710222750072649, 0.9556331888763945, 0.8882783211709133,
              0.8165876695985228, 0.6438319113259414, 0.8952188423349681,
              0.9749191829308574, 1.0, 0.9640316662124185])
Y = np.array([0.7766424210611557, 0.7758207470960685, 0.37345286573743475,
              0.6060269037514895, 0.8210846773655833, 0.8041331063189883,
              0.8885893668675711, 0.8882905258641204, 0.9991817903431941, 1.0,
              0.4678339275898943, 0.8164056182686034, 0.9989199241685126,
              0.9980525368790883, 0.9312152374846061, 0.9096752789481569,
              0.9996114311913593, 0.46367666660348744, 0.9320002808310418,
              0.9542371205095945, 0.9980241561997089, 0.9319771743389491,
              0.8925487603333683, 0.6653736469114154, 0.5264993912088891,
              0.9325789202459493, 0.9990691225991941, 0.975653685270635,
              0.9111205717076893, 0.9992560731072977])

# Reshape all data to be of the shape (batch_size, 1)
X_train = X_train.reshape((-1, 1))
Y_train = Y_train.reshape((-1, 1))
X_test = X_test.reshape((-1, 1))
Y_test = Y_test.reshape((-1, 1))
X = X.reshape((-1, 1))
Y = Y.reshape((-1, 1))

# Is data scaled? NNs do well with bounded data.
assert np.all(X_train >= 0) and np.all(X_train <= 1)
assert np.all(Y_train >= 0) and np.all(Y_train <= 1)
assert np.all(X_test >= 0) and np.all(X_test <= 1)
assert np.all(Y_test >= 0) and np.all(Y_test <= 1)
assert np.all(X >= 0) and np.all(X <= 1)
assert np.all(Y >= 0) and np.all(Y <= 1)

# Build a model with variable number of hidden layers.
# We will use Keras functional API.
# https://www.perfectlyrandom.org/2019/06/24/a-guide-to-keras-functional-api/
n_dense_layers = 30  # increase this to get more complicated models

# Define the layers first.
input_tensor = Input(shape=(1,), name='input')
layers = []
for i in range(n_dense_layers):
    layers += [Dense(units=50, activation='relu', name=f'dense_layer_{i}')]
    if i > 0 and i % 5 == 0:
        # avg over batches, not features
        layers += [BatchNormalization(axis=1)]
sigmoid_layer = Dense(units=1, activation='sigmoid', name='sigmoid_layer')

# Connect the layers using Keras Functional API
mid_layer = input_tensor
for dense_layer in layers:
    mid_layer = dense_layer(mid_layer)
output_tensor = sigmoid_layer(mid_layer)
model = Model(inputs=[input_tensor], outputs=[output_tensor])
optimizer = Adam(learning_rate=0.0005)
model.compile(optimizer=optimizer, loss='mae', metrics=['mae'])
model.fit(x=[X_train], y=[Y_train], epochs=40000, batch_size=21)

# Predict on various datasets
Y_train_pred = model.predict(X_train)

# Create a dataframe to inspect results manually
train_df = pd.DataFrame({
    'x': X_train.reshape((-1)),
    'y_true': Y_train.reshape((-1)),
    'y_pred': Y_train_pred.reshape((-1))
})
train_df['error'] = train_df['y_pred'] - train_df['y_true']
print(train_df)

# A dictionary to store all the errors in one place.
train_errors = {
    'me': np.mean(train_df['error']),
    'mae': np.mean(np.abs(train_df['error'])),
    'mse': np.mean(np.square(train_df['error'])),
    'rmse': np.sqrt(np.mean(np.square(train_df['error']))),
}
print(train_errors)

# Make a plot to visualize true vs predicted
plt.figure(1)
plt.clf()
plt.plot(train_df['x'], train_df['y_true'], 'r.', label='y_true')
plt.plot(train_df['x'], train_df['y_pred'], 'bo', alpha=0.25, label='y_pred')
plt.grid(True)
plt.xlabel('x')
plt.ylabel('y')
plt.title(f'Train data. MSE={np.round(train_errors["mse"], 5)}.')
plt.legend()
plt.show(block=False)
plt.savefig('true_vs_pred.png')

Answer 1 (score: 0):

One problem you may be running into is that you do not have enough training data for the model to fit well. In your example you have only 21 training instances, each with only 1 feature. Broadly speaking with neural network models, you need on the order of 10K or more training instances in order to produce a decent model.

Consider the following code that generates a noisy sine wave and tries to train a densely-connected feed-forward neural network to fit the data. My model has two linear layers, each with 50 hidden units and a ReLU activation function. The experiments are parameterized with the variable num_points, which I will increase.

import tensorflow as tf
from tensorflow import keras

from tensorflow.keras import layers
import numpy as np
import matplotlib.pyplot as plt


np.random.seed(7)

def generate_data(num_points=100):
    X = np.linspace(0.0 , 2.0 * np.pi, num_points).reshape(-1, 1)
    noise = np.random.normal(0, 1, num_points).reshape(-1, 1)
    y = 3 * np.sin(X) + noise
    return X, y

def run_experiment(X_train, y_train, X_test, batch_size=64):
    num_points = X_train.shape[0]

    model = keras.Sequential()
    model.add(layers.Dense(50, input_shape=(1, ), activation='relu'))
    model.add(layers.Dense(50, activation='relu'))
    model.add(layers.Dense(1, activation='linear'))
    model.compile(loss = "mse", optimizer = "adam", metrics=["mse"] )
    history = model.fit(X_train, y_train, epochs=10,
                        batch_size=batch_size, verbose=0)

    # Note: as invoked below, X_test is the same array as X_train,
    # so plotting yhat against X_train lines up correctly.
    yhat = model.predict(X_test, batch_size=batch_size)
    plt.figure(figsize=(5, 5))
    plt.plot(X_train, y_train, "ro", markersize=2, label='True')
    plt.plot(X_train, yhat, "bo", markersize=1, label='Predicted')
    plt.ylim(-5, 5)
    plt.title('N=%d points' % (num_points))
    plt.legend()
    plt.grid()
    plt.show()

Here is how I invoke the code:

num_points = 100
X, y = generate_data(num_points)
run_experiment(X, y, X)

Now, if I run the experiment with num_points = 100, the model predictions (in blue) do a terrible job at fitting the true noisy sine wave (in red).

[plot: N=100 points]

Now here is num_points = 1000:

[plot: N=1000 points]

Here is num_points = 10000:

[plot: N=10000 points]

And here is num_points = 100000:

[plot: N=100000 points]

As you can see, for my chosen NN architecture, adding more training instances allows the neural network to better (over)fit the data.

If you do have a lot of training instances and you want to purposefully overfit your data, you can increase the neural network capacity or reduce regularization. Specifically, you can control the following knobs (see the sketch after this list):

  • Increase the number of layers
  • Increase the number of hidden units
  • Increase the number of features per data instance
  • Reduce regularization (e.g., by removing dropout layers)
  • Use a more complex neural network architecture (e.g., transformer blocks instead of an RNN)
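As a minimal sketch of the first, second, and fourth knobs (my own illustration; the layer and unit counts are arbitrary):

import tensorflow as tf
from tensorflow.keras import layers

# A deliberately constrained model: few units plus dropout regularization.
small_regularized = tf.keras.Sequential([
    layers.Dense(16, input_shape=(1,), activation='relu'),
    layers.Dropout(0.5),  # regularization discourages memorization
    layers.Dense(1, activation='linear'),
])

# A higher-capacity model: more layers, more units, no dropout.
# On a small dataset this one is far more likely to overfit.
large_unregularized = tf.keras.Sequential([
    layers.Dense(256, input_shape=(1,), activation='relu'),
    layers.Dense(256, activation='relu'),
    layers.Dense(256, activation='relu'),
    layers.Dense(1, activation='linear'),
])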

You may be wondering whether neural networks can fit arbitrary data, and not just a noisy sine wave as in my example. Previous research says that, yes, a big enough neural network can fit just about any data; see, for example, the universal approximation theorem.
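A quick way to convince yourself of that claim (my own sketch, not part of the original answer) is to fit a wide network to purely random targets; with enough capacity and epochs, the training loss should approach zero, meaning the network has simply memorized the data:

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Purely random data in [0, 1]: there is no pattern to learn, only memorize.
rng = np.random.default_rng(0)
X_rand = rng.uniform(0, 1, size=(50, 1))
y_rand = rng.uniform(0, 1, size=(50, 1))

model = tf.keras.Sequential([
    layers.Dense(256, input_shape=(1,), activation='relu'),
    layers.Dense(256, activation='relu'),
    layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss='mae')
model.fit(X_rand, y_rand, epochs=5000, batch_size=50, verbose=0)

# Training MAE should be near zero if memorization succeeded.
print(model.evaluate(X_rand, y_rand, verbose=0))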

Answer 2 (score: -1):

As mentioned in the comments, you should make a Python array (using NumPy) like this:

Myarray = [[0.65, 1], [0.85, 0.5], ....] 

Then you would call just the specific parts of the array that you need for prediction. Here the first value is the x-axis value, so you would call it to fetch the corresponding pair stored in Myarray.
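A small sketch of what that indexing might look like (the values are just the illustrative ones above):

import numpy as np

# Hypothetical (x, y) pairs stored row-wise, as suggested above.
Myarray = np.array([[0.65, 1.0], [0.85, 0.5], [0.70, 0.78]])

x_values = Myarray[:, 0]  # first column: the x-axis values
y_values = Myarray[:, 1]  # second column: the corresponding y values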

There are lots of resources for learning these kinds of things. Some of them are:

  1. https://www.geeksforgeeks.org/python-using-2d-arrays-lists-the-right-way/

  2. https://www.youtube.com/watch?v=QgfUT7i4yrc