I am using TensorFlow 2 to train a neural network, and I want the loss function to look like this:
[loss equation]
with
[condition equation]
where C is an arbitrary constant, x is the input of the network, and y is the output of the network.
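In other words, judging from my attempt below, the loss should be roughly

$$\mathcal{L} \;=\; \frac{1}{N}\sum_{i=1}^{N}\left(\frac{dy}{dx}\bigg|_{x=x_i} - x_i\right)^{2},$$

together with an extra condition that fixes the arbitrary constant C (for example a boundary value such as y(a) = C; the exact form is what the missing equation specified).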
I have already built the network architecture, but I am stuck on how to set up this loss function and the training process that minimizes it.
Here is what I have so far:
import numpy as np
import tensorflow as tf
from tensorflow import keras

def MyLossFunction(model, x):
    with tf.GradientTape() as tape:
        tape.watch(x)
        y = model(x)               # the forward pass must happen inside the tape
    y_x = tape.gradient(y, x)      # dy/dx
    return tf.reduce_mean(tf.square(y_x - x))
# Properties of the problem
a = -2    # start of interval
b = 2     # end of interval
N = 100   # number of points inside the interval
x = tf.constant(np.arange(a, b, (b - a) / N).reshape((N, 1)), dtype=tf.float32)
y = np.zeros(N)   # placeholder targets (not used by the loss above)
C = 0     # arbitrary constant

# Properties of the neural network
N_INPUT = 1    # number of neurons in the input layer
N_HIDDEN = 32  # number of neurons in the hidden layer
N_OUTPUT = 1   # number of neurons in the output layer

# Training variables
# LearningRate = 0.003
N_EPOCHS = 16
model = keras.Sequential([
    keras.Input(shape=(N_INPUT,)),
    keras.layers.Dense(N_HIDDEN, activation='sigmoid'),
    keras.layers.Dense(N_OUTPUT)
])
model.compile(optimizer='adam',
              loss=MyLossFunction)  # <-- this is where I am stuck: a Keras loss only
                                    #     receives (y_true, y_pred), never the input x
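One way this could be wired together (just a sketch, assuming the reconstructed loss above and that C enters as a simple boundary penalty) is to skip compile()/fit() and write a manual training loop: the inner GradientTape inside MyLossFunction produces dy/dx, and an outer tape differentiates the resulting loss with respect to the model weights.

optimizer = keras.optimizers.Adam(learning_rate=0.003)  # matches the commented-out LearningRate

for epoch in range(N_EPOCHS):
    with tf.GradientTape() as outer_tape:
        loss = MyLossFunction(model, x)   # inner tape inside this call computes dy/dx
        # (assumption) if C is a boundary condition y(a) = C, add a penalty such as:
        # loss = loss + tf.reduce_mean(tf.square(model(x[:1]) - C))
    grads = outer_tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    print(f"epoch {epoch:3d}  loss = {loss.numpy():.6f}")

The nested-tape pattern is what makes this work: because model(x) and the inner tape.gradient(y, x) both run while outer_tape is recording, the outer tape can differentiate the derivative-based loss with respect to the trainable weights.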