Simple Keras NN does not predict well

Time: 2017-08-24 21:16:50

Tags: neural-network keras

Code:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

x1 = np.array([1, 10])
x2 = np.array([7, 4])
x3 = np.array([8, 7])
x4 = np.array([1, 15])
x5 = np.array([4, 4])
X = np.array([x1, x2, x3, x4, x5])
X = X / 100
Y = np.array([4, 8, 7, 5, 1])
Y = Y / 100
model = Sequential()
model.add(Dense(4, input_dim=2, activation='sigmoid', kernel_initializer="uniform"))
model.add(Dense(2, activation='sigmoid', kernel_initializer="uniform"))
model.add(Dense(1, activation='sigmoid', kernel_initializer="uniform"))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(X, Y, epochs=500, batch_size=3)
toPred = np.array([x1]) / 100
print(model.predict(toPred) * 100)

For everything I try to predict I get a strange result: all the predictions are almost identical, and none of them are close to the actual values.

Any suggestions?

1 Answer:

Answer 0 (score: 1)

Try this example. I didn't change much, only the scaling method and a longer training time.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense


x1 = np.array([1, 10])
x2 = np.array([7, 4])
x3 = np.array([8, 7])
x4 = np.array([1, 15])
x5 = np.array([4, 4])
X = np.array([x1, x2, x3, x4, x5])

# Rescale X to keep the values small, since the input activation is a sigmoid
X = (X - X.std()) / X.mean()

# Don't need to scale Y; that's one less unnecessary operation
Y = np.array([4, 8, 7, 5, 1])

model = Sequential()
model.add(Dense(4, input_dim=2, activation='sigmoid', kernel_initializer="uniform"))
model.add(Dense(2, activation='sigmoid', kernel_initializer="uniform"))

#Set output activation to linear
model.add(Dense(1, activation='linear', kernel_initializer="uniform"))
model.compile(loss='mean_squared_error', optimizer='adam')

#Train for 5k epochs, since the loss keeps decreasing
model.fit(X, Y, epochs=5000, batch_size=5)

print(model.predict(X))

which gives me

[[ 3.50988507]
 [ 7.0278182 ]
 [ 7.61787605]
 [ 5.38016272]
 [ 1.63140726]]

Sometimes you just need to tweak the hyperparameters. You can drop the second Dense layer, since this dataset is small (see the sketch after the snippet below), and I also got better results using the 'SGD' (stochastic gradient descent) optimizer. Raising the learning rate can also get you good results faster (that may only hold for this snippet). So just experiment until you find the result you want. Hope this helps :)

from keras.optimizers import SGD
opt = SGD(lr=.05)
model.compile(loss='mean_squared_error', optimizer=opt)
model.fit(X, Y, epochs=1000, batch_size=5)
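
For illustration, here is a minimal end-to-end sketch of the simplified variant described above: the second Dense layer is removed and SGD is used with the lr=.05 from the snippet. The exact layer size and epoch count are assumptions for this sketch, not values from the original answer.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

x1 = np.array([1, 10])
x2 = np.array([7, 4])
x3 = np.array([8, 7])
x4 = np.array([1, 15])
x5 = np.array([4, 4])
X = np.array([x1, x2, x3, x4, x5])

# Same input scaling as in the answer above
X = (X - X.std()) / X.mean()
Y = np.array([4, 8, 7, 5, 1])

# Single hidden layer: the second Dense layer has been dropped
model = Sequential()
model.add(Dense(4, input_dim=2, activation='sigmoid', kernel_initializer="uniform"))
model.add(Dense(1, activation='linear', kernel_initializer="uniform"))

# SGD with a raised learning rate, as suggested in the answer
opt = SGD(lr=.05)
model.compile(loss='mean_squared_error', optimizer=opt)
model.fit(X, Y, epochs=1000, batch_size=5)

print(model.predict(X))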