Loss = nan and accuracy = 0 with a 1D CNN on ECG data

Date: 2020-07-16 10:11:23

Tags: python, tensorflow, keras, cnn

I am training a 1D convolutional neural network on a [4992 × 1] dataframe. After just one epoch I get an accuracy of 0 and a loss of NaN.

Here is my data:

 1.dat    10.dat   100.dat  ...       318.dat   319.dat    32.dat
0    -0.066321  0.089496  1.105313  ...  3.900183e+21  0.211655 -0.304877
1    -0.068065  0.409170  1.022586  ... -1.415044e+21  0.231534 -0.422368
2    -0.077092  0.757832  1.019365  ...  3.887821e+21  0.396038 -0.370392
3    -0.090718  0.990959  1.082419  ...  1.673943e+22  0.654156 -0.110462
4    -0.086334  0.856819  0.962128  ...  2.155512e+22  0.679361  0.098806
...        ...       ...       ...  ...           ...       ...       ...
4989       NaN       NaN  0.992868  ... -9.179290e+20  0.242418       NaN
4990       NaN       NaN  1.012630  ...  6.634378e+21  0.110071       NaN
4991       NaN       NaN  1.026575  ...  1.254544e+22  0.060935       NaN
4992       NaN       NaN  1.055535  ...  1.383217e+22  0.085435       NaN
4993       NaN       NaN  1.110496  ...  1.069024e+22  0.164618       NaN

print(X_train.shape,y_train.shape)

Out: (246, 4992, 1) (246, 4992, 1)
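The data preview above already shows NaN entries and extreme values on the order of 1e+21, either of which will propagate into a NaN loss. A quick sanity check before training (a minimal sketch; `X_train` here is a stand-in array built to mimic the shapes in the question):

```python
import numpy as np

# Hypothetical stand-in for the question's (246, 4992, 1) training array.
X_train = np.random.randn(246, 4992, 1)
X_train[0, 0, 0] = np.nan   # seed a NaN so the check fires
X_train[1, 0, 0] = np.inf   # seed an inf as well

# Any single NaN or inf anywhere in the array is enough to poison the loss.
has_nan = np.isnan(X_train).any()
has_inf = np.isinf(X_train).any()
print(has_nan, has_inf)  # → True True
```

If either check prints True, the data needs cleaning before it is fed to the model.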

import keras
from keras.layers import Input, Activation, BatchNormalization
from keras.layers import Conv1D, MaxPooling1D, UpSampling1D
from keras.models import Model
from keras.optimizers import RMSprop
timesteps = 4992
features = 1
def create_Conv1D():
  inputs = Input(shape = (timesteps,features))
  activation = 'softmax'
  encoded = Conv1D(32,3, padding = 'same')(inputs)
  encoded = BatchNormalization()(encoded)
  encoded = Activation(activation)(encoded)
  encoded = MaxPooling1D(2,padding = 'same')(encoded)
  
  
  encoded = Conv1D(16,3,padding = 'same')(encoded)
  encoded = BatchNormalization()(encoded)
  encoded = Activation(activation)(encoded)
  encoded = MaxPooling1D(2,padding = 'same')(encoded)
  
  encoded = Conv1D(8,3, padding = 'same')(encoded)
  encoded = BatchNormalization()(encoded)
  encoded = Activation(activation)(encoded)
  encoded = MaxPooling1D(2,padding = 'same')(encoded)
  
  
  decoded = Conv1D(8,3,padding = 'same')(encoded)
  encoded = BatchNormalization()(encoded)
  decoded = Activation(activation)(decoded)
  decoded = UpSampling1D(2)(decoded)
  
  decoded = Conv1D(16,3, padding = 'same')(decoded)
  encoded = BatchNormalization()(encoded)
  decoded = Activation(activation)(decoded)
  decoded = UpSampling1D(2)(decoded)
  
  decoded = Conv1D(32,3,padding = 'same')(decoded)
  encoded = BatchNormalization()(encoded)
  decoded = Activation(activation)(decoded)
  decoded = UpSampling1D(2)(decoded)
  
  decoded = Conv1D(1,3,activation = activation,padding = 'same')(decoded)
  #decoded = Dense(1, activation='softmax', init='he_normal', name='output')(decoded)
  
  autoencoder = Model(inputs,decoded)
  
  return autoencoder

autoencoder = create_Conv1D()

adam = keras.optimizers.Adam(lr=0.001)
autoencoder.compile(optimizer = adam, loss = 'binary_crossentropy', metrics=['accuracy'])
#autoencoder.compile(loss = "binary_crossentropy", optimizer = sgd, metrics=['accuracy'])

autoencoder.summary()

autoencoder.fit(X_train, y_train, epochs=1, batch_size=128, verbose=1)

This is what I get:

Epoch 1/1
246/246 [==============================] - 13s 52ms/step - loss: nan - accuracy: 0.0000e+00

What is wrong with my code? Please help; I have been stuck on this for a week.

1 Answer:

Answer 0 (score: 0)

Your data probably also contains some inf and -inf values. Replace them with NaN, and then replace the NaN with 0:

import numpy as np

X_train.replace([np.inf, -np.inf], np.nan, inplace=True)
X_train = X_train.fillna(0)
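The two-step cleanup above can be demonstrated end to end on a toy DataFrame (a minimal sketch; the column names and values are made up to mimic the question's data):

```python
import numpy as np
import pandas as pd

# Toy frame mimicking the question: trailing NaN in short columns,
# plus infinite values mixed in.
df = pd.DataFrame({
    "1.dat":  [-0.066, -0.068, np.nan],
    "10.dat": [np.inf, 0.409, -np.inf],
})

# Step 1: turn +/-inf into NaN so they are caught by the same fill.
df.replace([np.inf, -np.inf], np.nan, inplace=True)

# Step 2: fill every remaining NaN with 0.
df = df.fillna(0)

print(df.isna().any().any())  # → False: no missing values remain
```

After this, `np.isnan` and `np.isinf` both come back clean, and the loss can no longer be poisoned by missing or infinite inputs (though the huge ~1e+21 magnitudes in the data may still warrant scaling).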