FYI: I have uploaded everything (data + a simplified script) so you can test it yourself.
Here is my problem: I am trying to train a very simple model that uses the four input values x(0), x(1), x(2), x(3) to predict the value x(4), i.e. y = x(4).
However, I modified the data so that y = x(4) is a perfect linear extrapolation: y = x(3) + (x(3) - x(2)).
The model I use is a single Dense layer with one output neuron, i.e. four weights. The weights "0 0 -1 2" would be a perfect solution (loss 0).
However, training never reaches these values.
Can you help, or tell me why?
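For reference, here is a minimal numpy check (separate from the training script below; the variable names are just for illustration) that the stated weights reproduce the extrapolation target exactly:
import numpy as np
# a few random 4-value inputs, shaped like the training samples
X = np.random.random( ( 5, 4 ) )
# the perfect linear extrapolation target: y = x(3) + (x(3) - x(2))
y = X[:,3] + ( X[:,3] - X[:,2] )
# the claimed perfect weights for a bias-free Dense(1) layer
w = np.array( [ 0.0, 0.0, -1.0, 2.0 ] )
y_hat = X.dot( w )
print( np.allclose( y_hat, y ) )   # True -> loss 0 is attainable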
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Flatten, Dense
from keras.optimizers import Adadelta, Adam
import keras.backend as K
def root_mean_squared_error(y_true, y_pred):
    return K.sqrt( K.mean( K.square( y_pred - y_true ) ) )
X_train = np.random.random(240000*4)
X_train = np.reshape( X_train, ( 240000, 1, 4 ) )
# predict the gradient of the last two inputs: y = x(3) - x(2)
y_train = X_train[:,0,3] - X_train[:,0,2]
inputShape = ( X_train.shape[1], X_train.shape[2] )
# create model
model = Sequential()
model.add( Flatten( input_shape=inputShape ) )
model.add( Dense( 1 ) )
model.compile( loss=root_mean_squared_error, optimizer=Adam( decay = 0.1 ) )
# train model
batchSize = 8
model.fit( X_train, y_train, epochs=10, batch_size=batchSize, shuffle=True )
y_train_predicted = model.predict( X_train)
y_train_predicted = np.asarray(y_train_predicted).ravel()
y_train_predicted_rmse = np.sqrt( np.mean( np.square( y_train_predicted - y_train ) ) )
print( "y_train RMSE = " + str( y_train_predicted_rmse ) )
Answer (score: 1):
The first thing I ask myself when an "obvious" model does not converge is whether the hyperparameters are appropriate.
I adjusted your code to fix the learning rate: I removed the decay and set a learning rate of 0.01 instead of the default 0.001 (see https://keras.io/optimizers/); with decay=0.1 the effective learning rate collapses after only a few hundred updates (see the sketch after the code below). The weights obtained after one epoch are
[ 9.3402149e-04],
[ 5.8139337e-04],
[-9.9929601e-01],
[ 1.0009530e+00]
which is roughly what we set up in the code:
[0, 0, -1, 1]
It also works with the default learning rate (0.001) as long as there is no decay. Find the working code below.
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Flatten, Dense
from keras.optimizers import Adadelta, Adam
import keras.backend as K
def root_mean_squared_error(y_true, y_pred):
    return K.sqrt( K.mean( K.square( y_pred - y_true ) ) )
X_train = np.random.random(240000*4)
X_train = np.reshape( X_train, ( 240000, 1, 4 ) )
y_train = X_train[:,0,3] - X_train[:,0,2]
inputShape = ( X_train.shape[1], X_train.shape[2] )
# create model
model = Sequential()
model.add( Flatten( input_shape=inputShape ) )
model.add( Dense( 1 ) )
model.compile( loss=root_mean_squared_error, optimizer=Adam( lr=0.01 ) )
# train model
batchSize = 8
model.fit( X_train, y_train, epochs=1, batch_size=batchSize, shuffle=True )
y_train_predicted = model.predict( X_train)
y_train_predicted = np.asarray(y_train_predicted).ravel()
y_train_predicted_rmse = np.sqrt( np.mean( np.square( y_train_predicted - y_train ) ) )
print( "y_train RMSE = " + str( y_train_predicted_rmse ) )
# inspect the learned weights of the Dense layer
model.layers[1].get_weights()
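Why did the original decay=0.1 prevent convergence? In Keras's time-based decay the effective learning rate is roughly lr / (1 + decay * iterations) (this formula is my assumption about the Keras implementation, not something stated above), and with 240000 samples at batch size 8 there are 30000 updates per epoch. A minimal sketch of that schedule:
# effective learning rate under Keras-style time-based decay (assumed formula)
def effective_lr( lr, decay, iterations ):
    return lr / ( 1.0 + decay * iterations )
lr, decay = 0.001, 0.1
for it in [ 0, 10, 100, 1000, 30000 ]:   # 30000 = one epoch at batch_size=8
    print( it, effective_lr( lr, decay, it ) )
# after ~100 updates the step size is already ~10x smaller, and by the end of
# one epoch it is ~3000x smaller, so the weights never get close to [0, 0, -1, 1]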