A description of my work follows.
Here is a snapshot of my model-training code:
# design network
model = Sequential()
model.add(LSTM(n_neurons, input_shape=(n_seq, n_features)))
model.add(Dense(n_seq))
for i in range(10):
    model.compile(loss='mae', optimizer='adam', metrics=['accuracy'])
    # prepare data for the LSTM (a separate sequence on each iteration)
    train_X, train_y, test_X, test_y = prepare_training_data(i)
    history = model.fit(train_X, train_y, epochs=nb_epoch,
                        batch_size=n_batch,
                        validation_data=(test_X, test_y),
                        verbose=2, shuffle=False)
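To sanity-check my understanding, here is a small standalone experiment (hypothetical layer sizes and synthetic random data, not my real setup) that tests whether a fit() call updates the weights and whether re-compiling by itself resets them:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

n_seq, n_features, n_neurons = 1, 7, 8   # hypothetical sizes

model = Sequential()
model.add(LSTM(n_neurons, input_shape=(n_seq, n_features)))
model.add(Dense(n_seq))
model.compile(loss='mae', optimizer='adam')

w_before = [w.copy() for w in model.get_weights()]

# one tiny synthetic batch, just to run a few training steps
X = np.random.rand(16, n_seq, n_features)
y = np.random.rand(16, n_seq)
model.fit(X, y, epochs=1, batch_size=4, verbose=0)

w_after = [w.copy() for w in model.get_weights()]
changed = any(not np.allclose(a, b) for a, b in zip(w_before, w_after))

# re-compiling resets the optimizer state, but not the layer weights
model.compile(loss='mae', optimizer='adam')
kept = all(np.allclose(a, b) for a, b in zip(w_after, model.get_weights()))
print("training changed the weights:", changed)
print("weights survive a re-compile:", kept)
```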
But the problem is that after every fit call the weights of my LSTM network appear to be reset (as far as I understand and observe), instead of the previous weights being retained.
Please correct me if I am wrong somewhere.
Please help me figure out how to carry the previously trained weights forward, so that the model is fine-tuned on each new sequence.
I have attached my findings below.
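One approach I am considering is to explicitly save the weights after each sequence and reload them before the next one, so every sequence fine-tunes the previous result. A sketch with synthetic data, a hypothetical prepare_training_data stand-in, and a hypothetical weights file name, not my real pipeline:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

n_seq, n_features, n_neurons = 1, 7, 8   # hypothetical sizes
n_batch, nb_epoch = 4, 1

def prepare_training_data(i):
    # stand-in for my real per-sequence data loading
    rng = np.random.default_rng(i)
    X = rng.random((16, n_seq, n_features))
    y = rng.random((16, n_seq))
    return X[:12], y[:12], X[12:], y[12:]

model = Sequential()
model.add(LSTM(n_neurons, input_shape=(n_seq, n_features)))
model.add(Dense(n_seq))
model.compile(loss='mae', optimizer='adam')  # compile once, outside the loop

weights_path = 'lstm_weights.weights.h5'     # hypothetical file name
for i in range(3):
    if i > 0:
        model.load_weights(weights_path)     # resume from the previous sequence
    train_X, train_y, test_X, test_y = prepare_training_data(i)
    model.fit(train_X, train_y, epochs=nb_epoch, batch_size=n_batch,
              validation_data=(test_X, test_y), verbose=0, shuffle=False)
    model.save_weights(weights_path)         # carry the weights forward
```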
Sequence 1:
Train on 448 samples, validate on 128 samples
Epoch 1/10
- 0s - loss: 0.3213 - acc: 0.3795 - val_loss: 0.1987 - val_acc: 0.4219
Epoch 2/10
- 0s - loss: 0.2046 - acc: 0.3460 - val_loss: 0.1980 - val_acc: 0.3047
Epoch 3/10
- 0s - loss: 0.1766 - acc: 0.3192 - val_loss: 0.1874 - val_acc: 0.4141
Epoch 4/10
- 0s - loss: 0.1773 - acc: 0.3571 - val_loss: 0.1867 - val_acc: 0.3047
Epoch 5/10
- 0s - loss: 0.1728 - acc: 0.3192 - val_loss: 0.1840 - val_acc: 0.2266
Epoch 6/10
- 0s - loss: 0.1660 - acc: 0.3571 - val_loss: 0.1843 - val_acc: 0.3203
Epoch 7/10
- 0s - loss: 0.1641 - acc: 0.3304 - val_loss: 0.1834 - val_acc: 0.3828
Epoch 8/10
- 0s - loss: 0.1599 - acc: 0.4330 - val_loss: 0.1835 - val_acc: 0.4219
Epoch 9/10
- 0s - loss: 0.1576 - acc: 0.4241 - val_loss: 0.1834 - val_acc: 0.4531
Epoch 10/10
- 0s - loss: 0.1555 - acc: 0.4487 - val_loss: 0.1829 - val_acc: 0.4766
acc: 47.66%
Sequence 2:
Train on 512 samples, validate on 160 samples
Epoch 1/10
Epoch 00001: val_acc improved from -inf to 0.45000, saving model to model.h5
- 1s - loss: 0.1473 - acc: 0.3828 - val_loss: 0.1546 - val_acc: 0.4500
Epoch 2/10
Epoch 00002: val_acc did not improve
- 0s - loss: 0.1433 - acc: 0.3867 - val_loss: 0.1553 - val_acc: 0.4250
Epoch 3/10
Epoch 00003: val_acc improved from 0.45000 to 0.45625, saving model to model.h5
- 0s - loss: 0.1432 - acc: 0.4180 - val_loss: 0.1535 - val_acc: 0.4562
Epoch 4/10
Epoch 00004: val_acc improved from 0.45625 to 0.47500, saving model to model.h5
- 0s - loss: 0.1421 - acc: 0.4238 - val_loss: 0.1545 - val_acc: 0.4750
Epoch 5/10
Epoch 00005: val_acc did not improve
- 0s - loss: 0.1413 - acc: 0.4004 - val_loss: 0.1564 - val_acc: 0.4562
Epoch 6/10
Epoch 00006: val_acc improved from 0.47500 to 0.51250, saving model to model.h5
- 0s - loss: 0.1405 - acc: 0.4258 - val_loss: 0.1562 - val_acc: 0.5125
Epoch 7/10
Epoch 00007: val_acc did not improve
- 0s - loss: 0.1394 - acc: 0.4785 - val_loss: 0.1527 - val_acc: 0.512
Epoch 8/10
Epoch 00008: val_acc improved from 0.51250 to 0.52500, saving model to model.h5
- 0s - loss: 0.1375 - acc: 0.4629 - val_loss: 0.1502 - val_acc: 0.5250
Epoch 9/10
Epoch 00009: val_acc did not improve
- 0s - loss: 0.1361 - acc: 0.4551 - val_loss: 0.1484 - val_acc: 0.4688
Epoch 10/10
Epoch 00010: val_acc did not improve
- 0s - loss: 0.1355 - acc: 0.4648 - val_loss: 0.1473 - val_acc: 0.4750
acc: 47.50%
Sequence 3:
Train on 480 samples, validate on 128 samples
Epoch 1/10
Epoch 00001: val_acc improved from -inf to 0.41406, saving model to model.h5
- 1s - loss: 0.1342 - acc: 0.3937 - val_loss: 0.1275 - val_acc: 0.4141
Epoch 2/10
Epoch 00002: val_acc did not improve
- 0s - loss: 0.1363 - acc: 0.4313 - val_loss: 0.1308 - val_acc: 0.3047
Epoch 3/10
Epoch 00003: val_acc improved from 0.41406 to 0.44531, saving model to model.h5
- 0s - loss: 0.1352 - acc: 0.4479 - val_loss: 0.1289 - val_acc: 0.4453
Epoch 4/10
Epoch 00004: val_acc did not improve
- 0s - loss: 0.1324 - acc: 0.4188 - val_loss: 0.1273 - val_acc: 0.3438
Epoch 5/10
Epoch 00005: val_acc did not improve
- 0s - loss: 0.1301 - acc: 0.4333 - val_loss: 0.1253 - val_acc: 0.4453
Epoch 6/10
Epoch 00006: val_acc did not improve
- 0s - loss: 0.1309 - acc: 0.4583 - val_loss: 0.1243 - val_acc: 0.4141
Epoch 7/10
Epoch 00007: val_acc did not improve
- 0s - loss: 0.1338 - acc: 0.4375 - val_loss: 0.1329 - val_acc: 0.4375
Epoch 8/10
Epoch 00008: val_acc did not improve
- 0s - loss: 0.1340 - acc: 0.4479 - val_loss: 0.1235 - val_acc: 0.3906
Epoch 9/10
Epoch 00009: val_acc did not improve
- 0s - loss: 0.1282 - acc: 0.4333 - val_loss: 0.1227 - val_acc: 0.3906
Epoch 10/10
Epoch 00010: val_acc did not improve
- 0s - loss: 0.1295 - acc: 0.4208 - val_loss: 0.1234 - val_acc: 0.2266
acc: 22.66%
A small snapshot of two sequences from the application execution trace:
Each sample has 8 features; the first seven are used as input, and the last one is treated as the prediction target.
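The way I split each trace row (using the first row of Sequence 1 below as an example): the first seven columns become the input features and the last column becomes the target.

```python
import numpy as np

# one row from the Sequence 1 trace
row = np.array([40, 626979.9375, 1196586.8750, 16452.5000,
                3275.4375, 519773.6875, 1.6600, 20.5535])
x, y = row[:7], row[7]   # 7 input features, 1 prediction target
print(x.shape, y)
```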
Sequence 1:
40 626979.9375 1196586.8750 16452.5000 3275.4375 519773.6875 1.6600 20.5535
40 692134.0000 1288955.4375 17689.7500 3352.3125 521722.0000 4.5441 43.7865
40 735489.6250 1336525.0625 17956.4375 3355.8750 522180.3750 3.0677 29.3883
40 779080.3125 1380106.4375 18235.3125 3357.3125 522605.8125 2.3105 19.5920
40 822905.4375 1423345.5625 18507.9375 3360.0000 522896.2500 10.8020 69.7630
40 866268.5625 1466615.0000 18773.5625 3362.8750 523337.1875 3.0905 19.2260
.......
Sequence 2:
40 582271.0625 1035435.8750 16294.5000 1256.5000 357175.3750 3.7675 34.1337
40 686667.4375 1193365.5000 18752.4375 1340.9375 361748.1250 3.8250 33.8135
40 735528.9375 1252983.3125 19288.8125 1354.3750 363153.9375 2.7997 25.0650
40 778706.5000 1295276.5625 19533.8125 1355.8125 363278.3750 3.6734 35.2727
40 822147.1250 1340507.5625 19808.3750 1357.1250 363673.7500 3.3200 39.5510
..... sequence n: