NaN loss when training an LSTM model on integer sequences

Asked: 2017-11-29 16:35:01

Tags: python deep-learning keras lstm

I am training an LSTM model to predict the next number in integer sequences. My input has the format:

Id  Sequence
3   1,3,13,87,1053,28576,2141733,508147108,402135275365,1073376057490373,9700385489355970183,298434346895322960005291,31479360095907908092817694945,11474377948948020660089085281068730
7   1,2,1,5,5,1,11,16,7,1,23,44,30,9,1,47,112,104,48,11,1,95,272,320,200,70,13,1,191,640,912,720,340,96,15,1,383,1472,2464,2352,1400,532,126,17,1,767,3328,6400,7168,5152,2464,784,160,19,1,1535,7424
8   1,2,4,5,8,10,16,20,32,40,64,80,128,160,256,320,512,640,1024,1280,2048,2560,4096,5120,8192,10240,16384,20480,32768,40960,65536,81920,131072,163840,262144,327680,524288,655360,1048576,1310720,2097152
11  1,8,25,83,274,2275,132224,1060067,3312425,10997342,36304451,301432950,17519415551,140456757358,438889687625,1457125820233,4810267148324,39939263006825,2321287521544174,18610239435360217
13  111,112,211,134,321,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000
15  1,1,1,1,1,1,1,1,1,5,1,1,1,1,5,5,1,1,1,1,11,5,5,11,5,1,1,1,1,5,23,5,23,5,5,1,1,1,1,21,5,39,5,5,39,5,21,5,1,1,1,1,5,1,17,1,17,1,1,5,1,1,1,1,31,5,5,29,1,1,29,1,5
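For context, a quick sketch (not part of my pipeline) of what one row parses into — note how large the last value already is in the first row:

```python
# Parse one Sequence cell the same way parse_fake below does.
row = "1,3,13,87,1053,28576,2141733,508147108,402135275365,1073376057490373,9700385489355970183,298434346895322960005291,31479360095907908092817694945,11474377948948020660089085281068730"
values = [float(v) for v in row.split(",")]
print(len(values))   # 14 terms
print(values[-1])    # ~1.147e+34 -- the value itself fits in float32, but its square does not
```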

My code is as follows:

# Imports inferred from the code below:
import numpy as np
import pandas as pd
import sklearn.model_selection as skcv
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout, Flatten

def parse_fake(train):
    # Split each comma-separated sequence into a float prefix (the inputs)
    # and its final value as the suffix (the prediction target).
    final_set = []
    for i in range(len(train)):
        x1 = list(np.array(train.iloc[i].split(","), dtype=float))
        prefix = x1[:-1]
        suffix = x1[-1]
        final_set.append([0, prefix, suffix])
    return final_set



dataset = pd.read_csv('G:/Python/integer_sequencing/sample.csv', index_col='Id')
train_data = parse_fake(dataset.Sequence)
df = pd.DataFrame(train_data, columns=['sid', 'prefix', 'suffix'])
df['suffix'].fillna(0, inplace=True)
#df.to_csv('testcsv.csv')
#print(df)
maxlen = df.prefix.apply(len).min()  # pad/truncate every prefix to the shortest length
X_train, X_test, y_train, y_test = skcv.train_test_split(df['prefix'].values, df['suffix'].values, test_size=.1)

X_train = pad_sequences(X_train, dtype='float', maxlen=maxlen)
X_test = pad_sequences(X_test, dtype='float', maxlen=maxlen)
print(X_train.shape)

# Add a feature dimension: (samples, timesteps) -> (samples, timesteps, 1)
X_train = X_train.reshape(X_train.shape + (1,))
X_test = X_test.reshape(X_test.shape + (1,))

print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
print('Build model...')
model = Sequential()
model.add(LSTM(128, input_shape= (maxlen, 1), return_sequences=True))
model.add(LSTM(128, return_sequences=True))  # input shape is inferred from the previous layer
model.add(Flatten())
model.add(Dropout(0.2))
model.add(Dense(1))
model.summary()

# try using different optimizers and different optimizer configs
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
print('Train...')
model.fit(X_train, y_train, batch_size=32, epochs=5)
#print(model.evaluate(X_test, y_test))

Whenever I try to train the model, it gives me a NaN loss. How can I fix this? The sample dataset can be found here. Can you explain why the NaN loss occurs, and what steps or corrections to my code would avoid it?
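My current suspicion (not confirmed): the raw targets are astronomically large, and mean_squared_error squares them in float32, which overflows to inf and then propagates NaN through the gradients. A minimal sketch of that overflow, plus a log-scaling workaround I am considering (the log1p transform is my own idea, not something from the code above):

```python
import numpy as np

# MSE squares the error; a target of ~1.1e34 squared is ~1.3e68,
# which exceeds float32's maximum (~3.4e38) and overflows to inf.
y = np.float32(1.1474378e+34)
squared = y * y
print(squared)                       # inf (numpy emits an overflow RuntimeWarning)
print(np.isnan(squared - squared))   # True: inf - inf is NaN, which poisons the gradients

# A common workaround: train on log-scaled targets, invert afterwards.
y_scaled = np.log1p(np.float64(y))   # ~78.4, comfortably inside float32's range
print(np.expm1(y_scaled))            # recovers the original magnitude, ~1.147e+34
```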

0 Answers
