I am writing an RBF neural network and having it predict the solution of a simple ODE. When the ODE's solution is linear (e.g. $$\frac{dy}{dx} = 1$$), the model accurately predicts $$y = x + c$$. However, when I define the model's loss from $$\frac{d^2y}{dx^2} = 1$$, the loss will not go below 1.
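In other words, the second-order equation should drive the network toward a quadratic rather than a line (straightforward calculus, stated here for context): $$\frac{d^2y}{dx^2} = 1 \;\Rightarrow\; y = \tfrac{1}{2}x^2 + bx + c.$$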
I have tried changing the learning rate, but that has no effect.
This works:
import numpy as np
import matplotlib.pyplot as plt
from keras.layers import Input, Dense, Lambda
from keras.models import Model
from keras import backend as K
# RBFLayer is my custom radial-basis-function layer (definition omitted).

x = np.arange(0.25, 0.75, 0.01)   # 50 training points on [0.25, 0.75)
labels = np.ones(50)              # target: dy/dx = 1 everywhere

# Differentiate tensor y with respect to tensor x inside the graph.
def gradient(y, x, give_name):
    return Lambda(lambda z: K.gradients(z[0], z[1]),
                  output_shape=[1], name=give_name)([y, x])

x1 = Input(shape=(1,))
rbflayer = RBFLayer(10, betas=1, input_shape=(1,))(x1)
y = Dense(1, activation='sigmoid', kernel_initializer='ones')(rbflayer)
g1 = gradient(y, x1, "dudx1")

model = Model(inputs=x1, outputs=[y, g1])
losses = {
    "dudx1": "mean_squared_error",   # only the first derivative is trained
}
model.compile(loss=losses, optimizer='adam')
model.fit(x, labels, epochs=1000, verbose=1)
plt.plot(x, model.predict(x)[0])
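As a rough sanity check of the first-order case (my own diagnostic, not part of the original training; np.gradient is only a finite-difference approximation, so merely close agreement is expected):

# Compare the in-graph derivative output against a numerical derivative
# of the predicted y values.
y_pred, g1_pred = model.predict(x)
fd = np.gradient(y_pred.ravel(), x)          # finite-difference dy/dx
print(np.max(np.abs(fd - g1_pred.ravel())))  # should be small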
This does not:
# Same imports, data, and gradient() helper as above.
x1 = Input(shape=(1,))
rbflayer = RBFLayer(10, betas=1, input_shape=(1,))(x1)
y = Dense(1, activation='sigmoid', kernel_initializer='ones')(rbflayer)
g1 = gradient(y, x1, "dudx1")
g11 = gradient(g1, x1, "dudxx1")   # nested call: d2y/dx2

model = Model(inputs=x1, outputs=[y, g1, g11])
losses = {
    "dudxx1": "mean_squared_error",   # now only the second derivative is trained
}
model.compile(loss=losses, optimizer='adam')
model.fit(x, labels, epochs=1000, verbose=1)
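To rule out the nested gradient computation itself, the second derivative can be checked in isolation (a minimal sketch using the same TF1-style K.gradients backend; xt, yt, g, gg, and f are names I made up for this check):

import numpy as np
from keras import backend as K

xt = K.placeholder(shape=(1, 1))
yt = 0.5 * xt ** 2                  # known function: d2y/dx2 = 1 everywhere
g = K.gradients(yt, xt)[0]          # dy/dx = x
gg = K.gradients(g, xt)[0]          # d2y/dx2
f = K.function([xt], [gg])
print(f([np.array([[0.5]])]))       # expect [[1.]]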
If you can see why the second example fails to converge, I would welcome any constructive feedback!