Deep learning with Keras - no learning during training

Time: 2018-05-15 19:03:18

Tags: python keras deep-learning

I'm building my first model with Keras on stock data, using technical indicators as the model inputs, and I've noticed that there is almost no learning: neither the loss nor the accuracy changes. Since I'm new to DL and Keras I may be overlooking something obvious, but I'm asking for help here.

Code snippet and training output below:

# Model definition (imports added for completeness)
from keras.models import Sequential
from keras.layers import Dense, Activation

model = Sequential()
model.add(Dense(1, input_dim=3))
model.add(Activation(activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])

data = trainingsetdata.as_matrix()
labels = trainingsetlabel.as_matrix()

# Baseline evaluation before training
score = model.evaluate(data, labels, batch_size=32, verbose=1)
print(score)

model.fit(data, labels, batch_size=32, epochs=100, validation_split=0.05, verbose=2)

# Evaluation after training
score = model.evaluate(data, labels, batch_size=32, verbose=1)
print(score)

[0.694263961315155, 0.4875]
Train on 380 samples, validate on 20 samples
Epoch 1/100
 - 0s - loss: 0.6939 - acc: 0.4605 - val_loss: 0.6900 - val_acc: 0.4000
Epoch 2/100
 - 0s - loss: 0.6934 - acc: 0.5079 - val_loss: 0.6882 - val_acc: 0.6000
Epoch 3/100
 - 0s - loss: 0.6932 - acc: 0.5211 - val_loss: 0.6867 - val_acc: 0.7000
Epoch 4/100
 - 0s - loss: 0.6929 - acc: 0.5289 - val_loss: 0.6858 - val_acc: 0.7000
Epoch 5/100
 - 0s - loss: 0.6929 - acc: 0.5237 - val_loss: 0.6850 - val_acc: 0.7000
Epoch 6/100
 - 0s - loss: 0.6928 - acc: 0.5263 - val_loss: 0.6841 - val_acc: 0.7000
Epoch 7/100
 - 0s - loss: 0.6927 - acc: 0.5184 - val_loss: 0.6836 - val_acc: 0.7000
Epoch 8/100
 - 0s - loss: 0.6926 - acc: 0.5184 - val_loss: 0.6828 - val_acc: 0.7000
Epoch 9/100
 - 0s - loss: 0.6925 - acc: 0.5105 - val_loss: 0.6823 - val_acc: 0.7000
Epoch 10/100
 - 0s - loss: 0.6925 - acc: 0.5079 - val_loss: 0.6816 - val_acc: 0.7000
Epoch 11/100
 - 0s - loss: 0.6923 - acc: 0.5132 - val_loss: 0.6808 - val_acc: 0.7000
Epoch 12/100
 - 0s - loss: 0.6923 - acc: 0.5105 - val_loss: 0.6800 - val_acc: 0.7000
Epoch 13/100
 - 0s - loss: 0.6923 - acc: 0.5105 - val_loss: 0.6793 - val_acc: 0.7000
Epoch 14/100
 - 0s - loss: 0.6922 - acc: 0.5105 - val_loss: 0.6789 - val_acc: 0.7000
Epoch 15/100
 - 0s - loss: 0.6922 - acc: 0.5105 - val_loss: 0.6783 - val_acc: 0.7000
Epoch 16/100
 - 0s - loss: 0.6921 - acc: 0.5132 - val_loss: 0.6780 - val_acc: 0.7000
Epoch 17/100
 - 0s - loss: 0.6922 - acc: 0.5132 - val_loss: 0.6774 - val_acc: 0.7000
Epoch 18/100
 - 0s - loss: 0.6921 - acc: 0.5132 - val_loss: 0.6771 - val_acc: 0.7000
Epoch 19/100
 - 0s - loss: 0.6920 - acc: 0.5132 - val_loss: 0.6768 - val_acc: 0.7000
Epoch 20/100
 - 0s - loss: 0.6922 - acc: 0.5132 - val_loss: 0.6768 - val_acc: 0.7000
Epoch 21/100
 - 0s - loss: 0.6921 - acc: 0.5132 - val_loss: 0.6766 - val_acc: 0.6500
Epoch 22/100
 - 0s - loss: 0.6921 - acc: 0.5132 - val_loss: 0.6764 - val_acc: 0.6500
Epoch 23/100
 - 0s - loss: 0.6920 - acc: 0.5132 - val_loss: 0.6762 - val_acc: 0.6500
Epoch 24/100
 - 0s - loss: 0.6922 - acc: 0.5132 - val_loss: 0.6762 - val_acc: 0.6500
Epoch 25/100
 - 0s - loss: 0.6920 - acc: 0.5132 - val_loss: 0.6761 - val_acc: 0.6500
Epoch 26/100
 - 0s - loss: 0.6921 - acc: 0.5132 - val_loss: 0.6759 - val_acc: 0.6500
Epoch 27/100
 - 0s - loss: 0.6921 - acc: 0.5132 - val_loss: 0.6758 - val_acc: 0.6500
Epoch 28/100
 - 0s - loss: 0.6921 - acc: 0.5132 - val_loss: 0.6757 - val_acc: 0.6500
Epoch 29/100
 - 0s - loss: 0.6920 - acc: 0.5132 - val_loss: 0.6757 - val_acc: 0.6500
Epoch 30/100
 - 0s - loss: 0.6920 - acc: 0.5158 - val_loss: 0.6758 - val_acc: 0.6500
Epoch 31/100
 - 0s - loss: 0.6921 - acc: 0.5132 - val_loss: 0.6756 - val_acc: 0.6500
Epoch 32/100
 - 0s - loss: 0.6922 - acc: 0.5132 - val_loss: 0.6757 - val_acc: 0.6500
Epoch 33/100
 - 0s - loss: 0.6920 - acc: 0.5132 - val_loss: 0.6756 - val_acc: 0.6500
Epoch 34/100
 - 0s - loss: 0.6920 - acc: 0.5132 - val_loss: 0.6755 - val_acc: 0.6500
Epoch 35/100
 - 0s - loss: 0.6921 - acc: 0.5132 - val_loss: 0.6757 - val_acc: 0.6500
Epoch 36/100
 - 0s - loss: 0.6920 - acc: 0.5132 - val_loss: 0.6755 - val_acc: 0.6500
Epoch 37/100
 - 0s - loss: 0.6920 - acc: 0.5132 - val_loss: 0.6754 - val_acc: 0.6500
Epoch 38/100
 - 0s - loss: 0.6921 - acc: 0.5158 - val_loss: 0.6752 - val_acc: 0.6500
Epoch 39/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6750 - val_acc: 0.6500
Epoch 40/100
 - 0s - loss: 0.6920 - acc: 0.5158 - val_loss: 0.6749 - val_acc: 0.6500
Epoch 41/100
 - 0s - loss: 0.6921 - acc: 0.5132 - val_loss: 0.6749 - val_acc: 0.6500
Epoch 42/100
 - 0s - loss: 0.6921 - acc: 0.5237 - val_loss: 0.6749 - val_acc: 0.6500
Epoch 43/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6749 - val_acc: 0.6500
Epoch 44/100
 - 0s - loss: 0.6920 - acc: 0.5158 - val_loss: 0.6748 - val_acc: 0.6500
Epoch 45/100
 - 0s - loss: 0.6920 - acc: 0.5158 - val_loss: 0.6749 - val_acc: 0.6500
Epoch 46/100
 - 0s - loss: 0.6921 - acc: 0.5263 - val_loss: 0.6746 - val_acc: 0.6500
Epoch 47/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6746 - val_acc: 0.6500
Epoch 48/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6745 - val_acc: 0.6500
Epoch 49/100
 - 0s - loss: 0.6920 - acc: 0.5211 - val_loss: 0.6744 - val_acc: 0.6500
Epoch 50/100
 - 0s - loss: 0.6920 - acc: 0.5211 - val_loss: 0.6744 - val_acc: 0.6500
Epoch 51/100
 - 0s - loss: 0.6921 - acc: 0.5132 - val_loss: 0.6747 - val_acc: 0.6500
Epoch 52/100
 - 0s - loss: 0.6920 - acc: 0.5132 - val_loss: 0.6745 - val_acc: 0.6500
Epoch 53/100
 - 0s - loss: 0.6920 - acc: 0.5211 - val_loss: 0.6746 - val_acc: 0.6500
Epoch 54/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6747 - val_acc: 0.6500
Epoch 55/100
 - 0s - loss: 0.6920 - acc: 0.5211 - val_loss: 0.6747 - val_acc: 0.6500
Epoch 56/100
 - 0s - loss: 0.6921 - acc: 0.5158 - val_loss: 0.6747 - val_acc: 0.6500
Epoch 57/100
 - 0s - loss: 0.6921 - acc: 0.5237 - val_loss: 0.6745 - val_acc: 0.6500
Epoch 58/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6743 - val_acc: 0.6500
Epoch 59/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6743 - val_acc: 0.6500
Epoch 60/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 61/100
 - 0s - loss: 0.6920 - acc: 0.5237 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 62/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 63/100
 - 0s - loss: 0.6920 - acc: 0.5237 - val_loss: 0.6741 - val_acc: 0.6500
Epoch 64/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6741 - val_acc: 0.6500
Epoch 65/100
 - 0s - loss: 0.6920 - acc: 0.5237 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 66/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6741 - val_acc: 0.6500
Epoch 67/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6741 - val_acc: 0.6500
Epoch 68/100
 - 0s - loss: 0.6921 - acc: 0.5184 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 69/100
 - 0s - loss: 0.6921 - acc: 0.5158 - val_loss: 0.6743 - val_acc: 0.6500
Epoch 70/100
 - 0s - loss: 0.6920 - acc: 0.5211 - val_loss: 0.6743 - val_acc: 0.6500
Epoch 71/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 72/100
 - 0s - loss: 0.6921 - acc: 0.5184 - val_loss: 0.6741 - val_acc: 0.6500
Epoch 73/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6739 - val_acc: 0.6500
Epoch 74/100
 - 0s - loss: 0.6920 - acc: 0.5237 - val_loss: 0.6741 - val_acc: 0.6500
Epoch 75/100
 - 0s - loss: 0.6920 - acc: 0.5237 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 76/100
 - 0s - loss: 0.6920 - acc: 0.5132 - val_loss: 0.6743 - val_acc: 0.6500
Epoch 77/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 78/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 79/100
 - 0s - loss: 0.6920 - acc: 0.5211 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 80/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 81/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6743 - val_acc: 0.6500
Epoch 82/100
 - 0s - loss: 0.6921 - acc: 0.5184 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 83/100
 - 0s - loss: 0.6920 - acc: 0.5211 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 84/100
 - 0s - loss: 0.6920 - acc: 0.5211 - val_loss: 0.6743 - val_acc: 0.6500
Epoch 85/100
 - 0s - loss: 0.6920 - acc: 0.5158 - val_loss: 0.6743 - val_acc: 0.6500
Epoch 86/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6743 - val_acc: 0.6500
Epoch 87/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 88/100
 - 0s - loss: 0.6920 - acc: 0.5211 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 89/100
 - 0s - loss: 0.6920 - acc: 0.5211 - val_loss: 0.6743 - val_acc: 0.6500
Epoch 90/100
 - 0s - loss: 0.6920 - acc: 0.5211 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 91/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6741 - val_acc: 0.6500
Epoch 92/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6739 - val_acc: 0.6500
Epoch 93/100
 - 0s - loss: 0.6920 - acc: 0.5211 - val_loss: 0.6740 - val_acc: 0.6500
Epoch 94/100
 - 0s - loss: 0.6920 - acc: 0.5211 - val_loss: 0.6741 - val_acc: 0.6500
Epoch 95/100
 - 0s - loss: 0.6920 - acc: 0.5237 - val_loss: 0.6740 - val_acc: 0.6500
Epoch 96/100
 - 0s - loss: 0.6921 - acc: 0.5263 - val_loss: 0.6738 - val_acc: 0.6500
Epoch 97/100
 - 0s - loss: 0.6920 - acc: 0.5263 - val_loss: 0.6739 - val_acc: 0.6500
Epoch 98/100
 - 0s - loss: 0.6920 - acc: 0.5237 - val_loss: 0.6741 - val_acc: 0.6500
Epoch 99/100
 - 0s - loss: 0.6920 - acc: 0.5237 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 100/100
 - 0s - loss: 0.6920 - acc: 0.5132 - val_loss: 0.6741 - val_acc: 0.6500
400/400 [==============================] - 0s 63us/step
[0.6910193943977356, 0.5275]

2 answers:

Answer 0 (score: 0)

The learning rate of your choice is supplied when you define the optimizer; you can do that as follows:

from keras.optimizers import RMSprop

# Pass an optimizer instance with an explicit learning rate.
# Note: binary_crossentropy matches the single sigmoid output in the question.
model.compile(RMSprop(lr=0.001), loss='binary_crossentropy', metrics=['accuracy'])

You can find a few more examples here.

Answer 1 (score: 0)

I can't say for certain why your model isn't learning; there could be many reasons. An explanation of the problem you're working on would help and may be the key to understanding why the model isn't learning. I'll make some initial guesses about the input data and the machine-learning theory:

Input data

  1. Is the input data properly normalized (or standardized) across all features? If the features aren't normalized, one feature may span a much larger range of values and overshadow the rest, so the model effectively considers only that one feature.
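As a minimal sketch of this point (pure NumPy; the `standardize` helper is my own, not from the question), each feature column can be rescaled to zero mean and unit variance before it is fed to the model:

```python
import numpy as np

def standardize(X):
    """Scale each feature (column) to zero mean and unit variance."""
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    return (X - mean) / std

# Example: the third feature dwarfs the other two before scaling.
X = np.array([[1.0, 0.2, 900.0],
              [2.0, 0.4, 300.0],
              [3.0, 0.6, 600.0]])
X_scaled = standardize(X)
```

In practice, compute the mean and standard deviation on the training set only and reuse them to transform the validation and test sets, so no information leaks from held-out data.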
ML theory

    1. I would try a model with at least 2 layers and a non-linear activation function: the reasoning is that without at least 2 layers and a non-linearity, the model remains purely linear (sigmoid is a non-linear activation).

    2. Whenever you face a new problem and try to solve it with a NN for the first time, I recommend the Adam optimizer. Adam is like a "fully automated version" of all the optimizers. It may not be the best choice in every case, but in my experience it is the best first-choice optimizer because it works well out of the box.

Hope this points you in the right direction and helps the model learn something.
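Putting points 1 and 2 together, a minimal sketch of such a model might look like this (the hidden-layer width of 16 is an arbitrary choice, `input_dim=3` matches the question's code, and the `tf.keras` API is used here):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense

def build_model(n_features):
    """Two-layer model: non-linear hidden layer plus sigmoid output."""
    model = Sequential([
        Input(shape=(n_features,)),
        Dense(16, activation='relu'),      # hidden layer, non-linear
        Dense(1, activation='sigmoid'),    # binary classification output
    ])
    # Adam as a robust first-choice optimizer, per point 2 above.
    model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model

model = build_model(3)
```

The ReLU hidden layer gives the model a non-linear decision boundary that a single sigmoid unit cannot represent; everything else mirrors the question's setup.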