Model learns nothing when I use Keras fit_generator

Date: 2018-06-22 10:52:31

Tags: python tensorflow neural-network keras

I have a large dataset that does not fit into RAM all at once.

So I wrote a custom generator function and trained the model with fit_generator for 100 epochs.
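The original code link did not survive, so as a point of reference, here is a minimal sketch of the shape `fit_generator` expects from a custom generator (all names are hypothetical): it must loop forever, yield one `(inputs, targets)` batch per step, and, ideally, re-shuffle the sample order at the start of each epoch. A generator that forgets the infinite loop or never shuffles is a common cause of flat or erratic loss curves like the ones below.

```python
import numpy as np

def batch_generator(x_data, y_data, batch_size, shuffle=True):
    """Hypothetical generator in the shape fit_generator expects:
    an infinite loop that yields (inputs, targets) batches and
    re-shuffles the index order at the start of every epoch."""
    n = len(x_data)
    indices = np.arange(n)
    while True:  # fit_generator pulls batches forever; never let this return
        if shuffle:
            np.random.shuffle(indices)  # new sample order each epoch
        for start in range(0, n, batch_size):
            batch_idx = indices[start:start + batch_size]
            # in a real pipeline you would load/augment samples from disk here
            yield x_data[batch_idx], y_data[batch_idx]
```

With a generator like this you would pass `steps_per_epoch=ceil(n / batch_size)` to `fit_generator`, so that one "epoch" consumes the dataset exactly once. Note also that `model.fit()` shuffles by default, while a custom generator only shuffles if you write it in, which is one plausible source of the difference reported below.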

The loss values are strange. Sometimes the loss barely changes at all; other times it decreases for the first few epochs and then starts increasing steadily from some point on.

When I train the same model architecture on the same dataset with model.fit(), the loss decreases as expected.

Can someone help me figure out what the problem is here?

Here is my code

The log below is an example of the loss staying constant:

Epoch 1/100 423/423 [==============================] - 137s 325ms/step - loss: 1.8152 - mean_squared_error: 0.0049 - acc: 0.6175
Epoch 2/100 423/423 [==============================] - 129s 304ms/step - loss: 1.9417 - mean_squared_error: 0.0051 - acc: 0.5940
Epoch 3/100 423/423 [==============================] - 128s 303ms/step - loss: 1.9391 - mean_squared_error: 0.0051 - acc: 0.5968
Epoch 4/100 423/423 [==============================] - 128s 303ms/step - loss: 1.9169 - mean_squared_error: 0.0051 - acc: 0.5965
Epoch 5/100 423/423 [==============================] - 128s 303ms/step - loss: 1.9513 - mean_squared_error: 0.0051 - acc: 0.5956
Epoch 6/100 423/423 [==============================] - 128s 303ms/step - loss: 1.9201 - mean_squared_error: 0.0051 - acc: 0.6005
Epoch 7/100 423/423 [==============================] - 128s 303ms/step - loss: 1.9341 - mean_squared_error: 0.0051 - acc: 0.5987

The log below is an example of the loss increasing:

Epoch 1/100 423/423 [==============================] - 192s 454ms/step - loss: 1.8386 - mean_squared_error: 0.0049 - acc: 0.6136
Epoch 2/100 423/423 [==============================] - 186s 439ms/step - loss: 1.8087 - mean_squared_error: 0.0043 - acc: 0.6201
Epoch 3/100 423/423 [==============================] - 184s 436ms/step - loss: 1.3863 - mean_squared_error: 0.0037 - acc: 0.6445
Epoch 4/100 423/423 [==============================] - 185s 438ms/step - loss: 1.1163 - mean_squared_error: 0.0032 - acc: 0.6856
Epoch 5/100 423/423 [==============================] - 186s 439ms/step - loss: 1.0246 - mean_squared_error: 0.0030 - acc: 0.7058
Epoch 6/100 423/423 [==============================] - 166s 392ms/step - loss: 1.0277 - mean_squared_error: 0.0030 - acc: 0.7130
Epoch 7/100 423/423 [==============================] - 186s 441ms/step - loss: 0.9387 - mean_squared_error: 0.0028 - acc: 0.7244
Epoch 8/100 423/423 [==============================] - 187s 443ms/step - loss: 0.9164 - mean_squared_error: 0.0028 - acc: 0.7305
Epoch 21/100 423/423 [==============================] - 188s 444ms/step - loss: 0.8649 - mean_squared_error: 0.0027 - acc: 0.7429
Epoch 26/100 423/423 [==============================] - 188s 444ms/step - loss: 1.1365 - mean_squared_error: 0.0032 - acc: 0.7036
Epoch 27/100 423/423 [==============================] - 188s 444ms/step - loss: 3.8524 - mean_squared_error: 0.0065 - acc: 0.4976
Epoch 28/100 423/423 [==============================] - 188s 444ms/step - loss: 2.6476 - mean_squared_error: 0.0057 - acc: 0.5406
Epoch 29/100 423/423 [==============================] - 186s 440ms/step - loss: 1.5818 - mean_squared_error: 0.0044 - acc: 0.6141
Epoch 30/100 423/423 [==============================] - 187s 443ms/step - loss: 2.4326 - mean_squared_error: 0.0056 - acc: 0.5285
Epoch 31/100 423/423 [==============================] - 188s 444ms/step - loss: 3.3618 - mean_squared_error: 0.0065 - acc: 0.4919
Epoch 32/100 423/423 [==============================] - 188s 444ms/step - loss: 4.7409 - mean_squared_error: 0.0072 - acc: 0.4452
Epoch 33/100 423/423 [==============================] - 186s 439ms/step - loss: 5.4348 - mean_squared_error: 0.0080 - acc: 0.4243  
Epoch 34/100 423/423 [==============================] - 188s 444ms/step - loss: 6.1750 - mean_squared_error: 0.0084 - acc: 0.4448

0 Answers