Keras: CNN model is not learning

Time: 2019-04-20 18:16:33

Tags: python tensorflow keras neural-network deep-learning

I want to train a model that predicts a person's emotion from physiological signals. I have one physiological signal that I use as the input feature:

ecg (electrocardiogram)

My dataset contains 312 records belonging to the participants, and each record has 18000 rows of data. So when I combine them into a single data frame, there are 5,616,000 rows in total.

Here is my train_x data frame:

            ecg  
0        0.1912 
1        0.3597 
2        0.3597 
3        0.3597 
4        0.3597 
5        0.3597 
6        0.2739 
7        0.1641 
8        0.0776 
9        0.0005 
10      -0.0375 
11      -0.0676 
12      -0.1071 
13      -0.1197 
..      ....... 
..      ....... 
..      ....... 
5616000 0.0226  

I have 6 classes corresponding to the emotions, and I have encoded these labels numerically:

anger = 0, calmness = 1, disgust = 2, fear = 3, happiness = 4, sadness = 5

Here is my train_y:

         emotion
0              0
1              0
2              0
3              0
4              0
.              .
.              .
.              .
18001          1
18002          1
18003          1
.              .
.              .
.              .
360001         2
360002         2
360003         2
.              .
.              .
.              .
.              .
5616000        5

To feed my CNN, I reshape train_x and re-encode the train_y data:

train_x = train_x.values.reshape(312,18000,1) 
train_y = train_y.values.reshape(312,18000)
train_y = train_y[:,:1]  # truncate train_y so there is a single label per complete signal
train_y = pd.DataFrame(train_y)
train_y = pd.get_dummies(train_y[0]) #one hot encoded labels

After these operations, this is what they look like. train_x after reshaping:

[[[0.60399908]
  [0.79763273]
  [0.79763273]
  ...
  [0.09779361]
  [0.09779361]
  [0.14732245]]

 [[0.70386905]
  [0.95101687]
  [0.95101687]
  ...
  [0.41530258]
  [0.41728671]
  [0.42261905]]

 [[0.75008021]
  [1.        ]
  [1.        ]
  ...
  [0.46412148]
  [0.46412148]
  [0.46412148]]

 ...

 [[0.60977509]
  [0.7756791 ]
  [0.7756791 ]
  ...
  [0.12725148]
  [0.02755331]
  [0.02755331]]

 [[0.59939494]
  [0.75514785]
  [0.75514785]
  ...
  [0.0391334 ]
  [0.0391334 ]
  [0.0578706 ]]

 [[0.5786066 ]
  [0.71539303]
  [0.71539303]
  ...
  [0.41355098]
  [0.41355098]
  [0.4112712 ]]]
train_y after one-hot encoding:

    0  1  2  3  4  5
0    1  0  0  0  0  0
1    1  0  0  0  0  0
2    0  1  0  0  0  0
3    0  1  0  0  0  0
4    0  0  0  0  0  1
5    0  0  0  0  0  1
6    0  0  1  0  0  0
7    0  0  1  0  0  0
8    0  0  0  1  0  0
9    0  0  0  1  0  0
10   0  0  0  0  1  0
11   0  0  0  0  1  0
12   0  0  0  1  0  0
13   0  0  0  1  0  0
14   0  1  0  0  0  0
15   0  1  0  0  0  0
16   1  0  0  0  0  0
17   1  0  0  0  0  0
18   0  0  1  0  0  0
19   0  0  1  0  0  0
20   0  0  0  0  1  0
21   0  0  0  0  1  0
22   0  0  0  0  0  1
23   0  0  0  0  0  1
24   0  0  0  0  0  1
25   0  0  0  0  0  1
26   0  0  1  0  0  0
27   0  0  1  0  0  0
28   0  1  0  0  0  0
29   0  1  0  0  0  0
..  .. .. .. .. .. ..
282  0  0  0  1  0  0
283  0  0  0  1  0  0
284  1  0  0  0  0  0
285  1  0  0  0  0  0
286  0  0  0  0  1  0
287  0  0  0  0  1  0
288  1  0  0  0  0  0
289  1  0  0  0  0  0
290  0  1  0  0  0  0
291  0  1  0  0  0  0
292  0  0  0  1  0  0
293  0  0  0  1  0  0
294  0  0  1  0  0  0
295  0  0  1  0  0  0
296  0  0  0  0  0  1
297  0  0  0  0  0  1
298  0  0  0  0  1  0
299  0  0  0  0  1  0
300  0  0  0  1  0  0
301  0  0  0  1  0  0
302  0  0  1  0  0  0
303  0  0  1  0  0  0
304  0  0  0  0  0  1
305  0  0  0  0  0  1
306  0  1  0  0  0  0
307  0  1  0  0  0  0
308  0  0  0  0  1  0
309  0  0  0  0  1  0
310  1  0  0  0  0  0
311  1  0  0  0  0  0

[312 rows x 6 columns]

After reshaping, I created my CNN model:

model = Sequential()
model.add(Conv1D(100,700,activation='relu',input_shape=(18000,1))) # kernel_size is 700 because 18000 rows = 60 seconds, so 700 rows ≈ 2.33 seconds, and an ECG signal has roughly two heartbeat peaks every 2 seconds
model.add(Conv1D(50,700))
model.add(Dropout(0.5))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling1D(4))
model.add(Flatten())
model.add(Dense(6,activation='softmax'))

adam = keras.optimizers.Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)

model.compile(optimizer = adam, loss = 'categorical_crossentropy', metrics = ['acc'])
model.fit(train_x,train_y,epochs = 50, batch_size = 32, validation_split=0.33, shuffle=False)

The problem is that the accuracy does not go above 0.2 and it fluctuates up and down. It looks like the model is not learning anything. I have tried adding layers, changing the learning rate, changing the loss function, changing the optimizer, scaling the data and normalizing the data, but nothing helped me solve this problem. I also tried simpler Dense models and an LSTM model, but I could not find anything that works.

How can I solve this problem? Thanks in advance.

EDIT:

I want to add the training results after 50 epochs:

Epoch 1/80
249/249 [==============================] - 24s 96ms/step - loss: 2.3118 - acc: 0.1406 - val_loss: 1.7989 - val_acc: 0.1587
Epoch 2/80
249/249 [==============================] - 19s 76ms/step - loss: 2.0468 - acc: 0.1647 - val_loss: 1.8605 - val_acc: 0.2222
Epoch 3/80
249/249 [==============================] - 19s 76ms/step - loss: 1.9562 - acc: 0.1767 - val_loss: 1.8203 - val_acc: 0.2063
Epoch 4/80
249/249 [==============================] - 19s 75ms/step - loss: 1.9361 - acc: 0.2169 - val_loss: 1.8033 - val_acc: 0.1905
Epoch 5/80
249/249 [==============================] - 19s 74ms/step - loss: 1.8834 - acc: 0.1847 - val_loss: 1.8198 - val_acc: 0.2222
Epoch 6/80
249/249 [==============================] - 19s 75ms/step - loss: 1.8278 - acc: 0.2410 - val_loss: 1.7961 - val_acc: 0.1905
Epoch 7/80
249/249 [==============================] - 19s 75ms/step - loss: 1.8022 - acc: 0.2450 - val_loss: 1.8092 - val_acc: 0.2063
Epoch 8/80
249/249 [==============================] - 19s 75ms/step - loss: 1.7959 - acc: 0.2369 - val_loss: 1.8005 - val_acc: 0.2222
Epoch 9/80
249/249 [==============================] - 19s 75ms/step - loss: 1.7234 - acc: 0.2610 - val_loss: 1.7871 - val_acc: 0.2381
Epoch 10/80
249/249 [==============================] - 19s 75ms/step - loss: 1.6861 - acc: 0.2972 - val_loss: 1.8017 - val_acc: 0.1905
Epoch 11/80
249/249 [==============================] - 19s 75ms/step - loss: 1.6696 - acc: 0.3173 - val_loss: 1.7878 - val_acc: 0.1905
Epoch 12/80
249/249 [==============================] - 19s 75ms/step - loss: 1.5868 - acc: 0.3655 - val_loss: 1.7771 - val_acc: 0.1270
Epoch 13/80
249/249 [==============================] - 19s 75ms/step - loss: 1.5751 - acc: 0.3936 - val_loss: 1.7818 - val_acc: 0.1270
Epoch 14/80
249/249 [==============================] - 19s 75ms/step - loss: 1.5647 - acc: 0.3735 - val_loss: 1.7733 - val_acc: 0.1429
Epoch 15/80
249/249 [==============================] - 19s 75ms/step - loss: 1.4621 - acc: 0.4177 - val_loss: 1.7759 - val_acc: 0.1270
Epoch 16/80
249/249 [==============================] - 19s 75ms/step - loss: 1.4519 - acc: 0.4498 - val_loss: 1.8005 - val_acc: 0.1746
Epoch 17/80
249/249 [==============================] - 19s 75ms/step - loss: 1.4489 - acc: 0.4378 - val_loss: 1.8020 - val_acc: 0.1270
Epoch 18/80
249/249 [==============================] - 19s 75ms/step - loss: 1.4449 - acc: 0.4297 - val_loss: 1.7852 - val_acc: 0.1587
Epoch 19/80
249/249 [==============================] - 19s 75ms/step - loss: 1.3600 - acc: 0.5301 - val_loss: 1.7922 - val_acc: 0.1429
Epoch 20/80
249/249 [==============================] - 19s 75ms/step - loss: 1.3349 - acc: 0.5422 - val_loss: 1.8061 - val_acc: 0.2222
Epoch 21/80
249/249 [==============================] - 19s 75ms/step - loss: 1.2885 - acc: 0.5622 - val_loss: 1.8235 - val_acc: 0.1746
Epoch 22/80
249/249 [==============================] - 19s 75ms/step - loss: 1.2291 - acc: 0.5823 - val_loss: 1.8173 - val_acc: 0.1905
Epoch 23/80
249/249 [==============================] - 19s 75ms/step - loss: 1.1890 - acc: 0.6506 - val_loss: 1.8293 - val_acc: 0.1905
Epoch 24/80
249/249 [==============================] - 19s 75ms/step - loss: 1.1473 - acc: 0.6627 - val_loss: 1.8274 - val_acc: 0.1746
Epoch 25/80
249/249 [==============================] - 19s 75ms/step - loss: 1.1060 - acc: 0.6747 - val_loss: 1.8142 - val_acc: 0.1587
Epoch 26/80
249/249 [==============================] - 19s 75ms/step - loss: 1.0210 - acc: 0.7510 - val_loss: 1.8126 - val_acc: 0.1905
Epoch 27/80
249/249 [==============================] - 19s 75ms/step - loss: 0.9699 - acc: 0.7631 - val_loss: 1.8094 - val_acc: 0.1746
Epoch 28/80
249/249 [==============================] - 19s 75ms/step - loss: 0.9127 - acc: 0.8193 - val_loss: 1.8012 - val_acc: 0.1746
Epoch 29/80
249/249 [==============================] - 19s 75ms/step - loss: 0.9176 - acc: 0.7871 - val_loss: 1.8371 - val_acc: 0.1746
Epoch 30/80
249/249 [==============================] - 19s 75ms/step - loss: 0.8725 - acc: 0.8233 - val_loss: 1.8215 - val_acc: 0.1587
Epoch 31/80
249/249 [==============================] - 19s 75ms/step - loss: 0.8316 - acc: 0.8514 - val_loss: 1.8010 - val_acc: 0.1429
Epoch 32/80
249/249 [==============================] - 19s 75ms/step - loss: 0.7958 - acc: 0.8474 - val_loss: 1.8594 - val_acc: 0.1270
Epoch 33/80
249/249 [==============================] - 19s 75ms/step - loss: 0.7452 - acc: 0.8795 - val_loss: 1.8260 - val_acc: 0.1587
Epoch 34/80
249/249 [==============================] - 19s 75ms/step - loss: 0.7395 - acc: 0.8916 - val_loss: 1.8191 - val_acc: 0.1587
Epoch 35/80
249/249 [==============================] - 19s 75ms/step - loss: 0.6794 - acc: 0.9357 - val_loss: 1.8344 - val_acc: 0.1429
Epoch 36/80
249/249 [==============================] - 19s 75ms/step - loss: 0.6106 - acc: 0.9357 - val_loss: 1.7903 - val_acc: 0.1111
Epoch 37/80
249/249 [==============================] - 19s 75ms/step - loss: 0.5609 - acc: 0.9598 - val_loss: 1.7882 - val_acc: 0.1429
Epoch 38/80
249/249 [==============================] - 19s 75ms/step - loss: 0.5788 - acc: 0.9478 - val_loss: 1.8036 - val_acc: 0.1905
Epoch 39/80
249/249 [==============================] - 19s 75ms/step - loss: 0.5693 - acc: 0.9398 - val_loss: 1.7712 - val_acc: 0.1746
Epoch 40/80
249/249 [==============================] - 19s 75ms/step - loss: 0.4911 - acc: 0.9598 - val_loss: 1.8497 - val_acc: 0.1429
Epoch 41/80
249/249 [==============================] - 19s 75ms/step - loss: 0.4824 - acc: 0.9518 - val_loss: 1.8105 - val_acc: 0.1429
Epoch 42/80
249/249 [==============================] - 19s 75ms/step - loss: 0.4198 - acc: 0.9759 - val_loss: 1.8332 - val_acc: 0.1111
Epoch 43/80
249/249 [==============================] - 19s 75ms/step - loss: 0.3890 - acc: 0.9880 - val_loss: 1.9316 - val_acc: 0.1111
Epoch 44/80
249/249 [==============================] - 19s 75ms/step - loss: 0.3762 - acc: 0.9920 - val_loss: 1.8333 - val_acc: 0.1746
Epoch 45/80
249/249 [==============================] - 19s 75ms/step - loss: 0.3510 - acc: 0.9880 - val_loss: 1.8090 - val_acc: 0.1587
Epoch 46/80
249/249 [==============================] - 19s 75ms/step - loss: 0.3306 - acc: 0.9880 - val_loss: 1.8230 - val_acc: 0.1587
Epoch 47/80
249/249 [==============================] - 19s 75ms/step - loss: 0.2814 - acc: 1.0000 - val_loss: 1.7843 - val_acc: 0.2222
Epoch 48/80
249/249 [==============================] - 19s 75ms/step - loss: 0.2794 - acc: 1.0000 - val_loss: 1.8147 - val_acc: 0.2063
Epoch 49/80
249/249 [==============================] - 19s 75ms/step - loss: 0.2430 - acc: 1.0000 - val_loss: 1.8488 - val_acc: 0.1587
Epoch 50/80
249/249 [==============================] - 19s 75ms/step - loss: 0.2216 - acc: 1.0000 - val_loss: 1.8215 - val_acc: 0.1587

8 Answers:

Answer 0 (score: 4)

I would suggest you take a few steps back and consider a much simpler approach, based on the following:

"I tried adding layers, changing the learning rate, changing the loss function, changing the optimizer, scaling the data, normalizing the data, but nothing helped me solve this problem. I also tried simpler Dense models and an LSTM model, but could not find anything that works."

It sounds like your understanding of the data and the tooling is not that strong yet... which is fine, because it is an opportunity to learn.

A few questions:

  1. Do you have a baseline model? Have you tried simply running a multinomial logistic regression? If not, I strongly suggest starting there. Going through the feature engineering needed to build such a model will be invaluable as you increase the complexity of your model (see the sketch at the end of this answer).

  2. Have you checked for class imbalance?

  3. Why are you using a CNN? What do you want to accomplish with the convolutional layers? For me, when I built a vision model to classify the shoes in my closet, I used several convolutional layers to extract spatial features such as edges and curves.

  4. Related to the third question: where did you get this architecture from? Is it from a publication? Is it the current state-of-the-art model for ECG traces? Or was it simply the most accessible model? Sometimes the two are not the same. I would dig into the literature and search the web a bit more for information on neural networks for analyzing ECG traces.

I think if you can answer these questions, you will be able to solve the problem yourself.
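
To make questions 1 and 2 concrete, here is a minimal sketch of such a baseline, assuming scikit-learn is available and that train_x and train_y still have the shapes shown in the question ((312, 18000, 1) and a one-hot (312, 6) DataFrame). Using the raw flattened signal as the feature vector is only meant as a sanity check, not as serious feature engineering.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X = train_x.reshape(312, 18000)            # flatten each recording into one feature row
y = np.argmax(train_y.values, axis=1)      # back from one-hot to integer labels 0..5

# question 2: check the class balance before anything else
print(np.bincount(y))

# multinomial logistic regression as a baseline; ~0.17 is chance level for 6 classes
baseline = LogisticRegression(multi_class='multinomial', solver='lbfgs', max_iter=1000)
print(cross_val_score(baseline, X, y, cv=5).mean())

If this baseline already reaches a useful accuracy, the problem lies in the CNN setup; if it stays at chance level, a single raw channel per 60-second recording may simply not carry enough information for 312 samples.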

Answer 1 (score: 1)

The immediate problem with your implementation is that, since you feed the model data of shape (312, 18000, 1), you only have 312 samples, and with a 0.33 validation split you are training on only 209 samples.

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv1d_1 (Conv1D)            (None, 17301, 100)        70100     
_________________________________________________________________
conv1d_2 (Conv1D)            (None, 16602, 50)         3500050   
_________________________________________________________________
dropout_1 (Dropout)          (None, 16602, 50)         0         
_________________________________________________________________
batch_normalization_1 (Batch (None, 16602, 50)         200       
_________________________________________________________________
activation_1 (Activation)    (None, 16602, 50)         0         
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 4150, 50)          0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 207500)            0         
_________________________________________________________________
dense_1 (Dense)              (None, 6)                 1245006   
=================================================================
Total params: 4,815,356
Trainable params: 4,815,256
Non-trainable params: 100
_________________________________________________________________

As you can see from model.summary(), your model has 4,815,256 trainable parameters in total, so it very easily overfits the training data. The problem is that you have far too many parameters to learn from so few samples. You can try reducing the model size as follows:

model = Sequential()
model.add(Conv1D(100,2,activation='relu',input_shape=(18000,1))) 
model.add(Conv1D(10,2))
model.add(Dropout(0.5))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling1D(4))
model.add(Flatten())
model.add(Dense(6,activation='softmax'))
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv1d_1 (Conv1D)            (None, 17999, 100)        300       
_________________________________________________________________
conv1d_2 (Conv1D)            (None, 17998, 10)         2010      
_________________________________________________________________
dropout_1 (Dropout)          (None, 17998, 10)         0         
_________________________________________________________________
batch_normalization_1 (Batch (None, 17998, 10)         40        
_________________________________________________________________
activation_1 (Activation)    (None, 17998, 10)         0         
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 4499, 10)          0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 44990)             0         
_________________________________________________________________
dense_1 (Dense)              (None, 6)                 269946    
=================================================================
Total params: 272,296
Trainable params: 272,276
Non-trainable params: 20
_________________________________________________________________

As far as I understand, you have 3 types of data: ecg, gsr and temp. So you could use train_x with shape (312, 18000, 3); your train_y would then be (312, 6).
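
A sketch of that 3-channel input, assuming ecg, gsr and temp are each available as flat series of 5,616,000 values (312 recordings x 18000 samples), like the ecg column in the question; gsr and temp are placeholder names here:

import numpy as np

ecg_x  = ecg.values.reshape(312, 18000)
gsr_x  = gsr.values.reshape(312, 18000)
temp_x = temp.values.reshape(312, 18000)

train_x = np.stack([ecg_x, gsr_x, temp_x], axis=-1)   # shape (312, 18000, 3)
# the first Conv1D layer then needs input_shape=(18000, 3)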

If the above solution does not work:

  1. Plot the class distribution of the dataset and check whether there is any class imbalance.
  2. Since the model overfits the data, try to create more data (if this dataset was created by you) or find a data augmentation technique for it (a sliding-window sketch follows this list).
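
As one possible augmentation technique (my suggestion, not something prescribed in this answer), each 18000-sample recording can be cut into overlapping windows that share the recording's label, which multiplies the number of training samples:

import numpy as np

def window_crops(signal, label, win=9000, step=3000):
    # split one recording into overlapping windows, all carrying the same label
    xs, ys = [], []
    for start in range(0, len(signal) - win + 1, step):
        xs.append(signal[start:start + win])
        ys.append(label)
    return np.array(xs), np.array(ys)

# with win=9000 and step=3000, each 18000-sample recording yields 4 crops,
# turning 312 samples into 1248 (the Conv1D input_shape then becomes (9000, 1))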

Answer 2 (score: 0)

I believe your code is correct, but as the commenters said, you are likely overfitting your data.

You may want to plot validation accuracy and training accuracy across epochs to visualize this.
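
A sketch of that plot, assuming the return value of model.fit is kept; with metrics=['acc'] as in the question, the history keys are 'acc' and 'val_acc' (newer Keras versions use 'accuracy' / 'val_accuracy'):

import matplotlib.pyplot as plt

history = model.fit(train_x, train_y, epochs=50, batch_size=32,
                    validation_split=0.33, shuffle=False)

plt.plot(history.history['acc'], label='train acc')
plt.plot(history.history['val_acc'], label='val acc')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()

A training curve climbing towards 1.0 while the validation curve stays flat around 0.15-0.2, as in the log above, is the classic overfitting picture.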

You should first consider whether a simpler model can fix your overfitting problem. Note that this is unlikely to improve your overall performance, but your validation accuracy will track your training accuracy much more closely. Another option is to add a pooling layer immediately after the convolutional layers.

Answer 3 (score: 0)

You can try adding regularizers (L1 or L2), checking the kernel_initializer, and/or adjusting the learning rate during training via callbacks. The example below comes from a regression model.

# imports needed for this snippet (Keras 2.x); dims, x, l, xtrain, ytrain,
# epochs and batch_size are placeholders from the original answer
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras import layers, regularizers, optimizers
from keras.callbacks import ReduceLROnPlateau

model = Sequential()
model.add(Dense(128, input_dim=dims, activation='relu'))
model.add(Dropout(0.2))
model.add(layers.BatchNormalization())
model.add(Dense(16, activation='relu', kernel_initializer='normal', kernel_regularizer=regularizers.l1(x)))
model.add(Dropout(0.2))
model.add(layers.BatchNormalization())
model.add(Dense(1, kernel_initializer='normal'))

model.compile(optimizer=optimizers.adam(lr=l), loss='mean_squared_error')

reduce_lr = ReduceLROnPlateau(monitor='val_loss', mode='min', factor=0.5, patience=3, min_lr=0.000001, verbose=1, cooldown=0)

history = model.fit(xtrain, ytrain, epochs=epochs, batch_size=batch_size, validation_split=0.3, callbacks=[reduce_lr])

Answer 4 (score: 0)

I suspect that the way train_y was preprocessed may not be properly synchronized with train_x. My question is: did you use a frequency-based technique to compress y_train?
I think that if you compressed the labels (per row) with a frequency-based technique, you may have introduced a high bias into the data. Let me know how the compression was done. Thanks.

Answer 5 (score: 0)

I would suggest the following:

  1. I see that the number of data points is small. The more complex the problem, the more data points a deep learning model needs to learn from. Look for a similar dataset that contains a large amount of data, train the network on that dataset, and then transfer it to your problem.

  2. Is there a way to augment the data? Your signals are 18000 samples long. You could downsample the data by half with different techniques and thereby extend the dataset; you would then work with signals of length 9000.

  3. Try reducing the convolution kernel size to 3 or 5 and increase the model depth by adding another conv layer (see the sketch after this list).

  4. I strongly suggest trying random forests and gradient boosted trees to see how they perform.
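
For suggestion 3, a possible shape of such a model (a sketch only, assuming the same Keras layer imports as in the question's model; the filter counts are assumptions):

model = Sequential()
model.add(Conv1D(32, 5, activation='relu', input_shape=(18000, 1)))
model.add(MaxPooling1D(4))
model.add(Conv1D(64, 5, activation='relu'))
model.add(MaxPooling1D(4))
model.add(Conv1D(64, 5, activation='relu'))
model.add(MaxPooling1D(4))
model.add(Flatten())
model.add(Dense(6, activation='softmax'))
# far fewer parameters than the original model, mostly because pooling shrinks
# the sequence before the final Dense layer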

Answer 6 (score: 0)

I worked on an ECG problem a year ago for my final assignment at university, but with a different method and different data (MIT-BIH).

It seems you are using a single lead, aren't you? Have you tried preparing the data beforehand, i.e. cleaning it (beware of heartbeat noise)? My suggestion is not to merge all the data into a single list for training, since that can easily overfit given the nature of human heartbeats; try training separately by gender or age instead. In some of the literature this has proved helpful.
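
One way to do that cleaning (my own sketch, not part of this answer) is a Butterworth band-pass filter with scipy; 18000 samples over 60 seconds implies a sampling rate of 300 Hz, and 0.5-40 Hz keeps the main ECG content while removing baseline wander and high-frequency noise:

from scipy.signal import butter, filtfilt

fs = 300.0                                    # assumed: 18000 samples / 60 s
low, high = 0.5 / (fs / 2), 40.0 / (fs / 2)   # normalized cut-off frequencies
b, a = butter(4, [low, high], btype='band')
clean_ecg = filtfilt(b, a, raw_ecg)           # raw_ecg: 1-D array for one recording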

Often a model does not work well not because of an implementation error, but because of how we prepared the data.

Answer 7 (score: 0)

Your model is clearly overfitting the dataset. One suggestion that nobody considered in the comments is to increase the stride. Here you have kernel size = 700, no padding and stride = 1, so you get an output of shape (None, 17301, 100) from the first Conv layer.

I would try increasing the stride to something on the order of 50 to 100 (moving your kernel by a step of 2.33/(700/stride) seconds), or inserting a pooling layer after each Conv layer.
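
A sketch of what the strided version could look like (layer sizes other than the first kernel are assumptions, not something given in the question):

model = Sequential()
model.add(Conv1D(100, 700, strides=50, activation='relu', input_shape=(18000, 1)))
# output is now (None, 347, 100) instead of (None, 17301, 100)
model.add(Conv1D(50, 7, strides=2, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(MaxPooling1D(4))
model.add(Flatten())
model.add(Dense(6, activation='softmax'))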