I am creating a basic autoencoder for the MNIST dataset using TensorFlow eager mode. I would like to observe the second-order partial derivatives of my loss function with respect to the network's trainable parameters. Currently, calling tape.gradient() on the output of in_tape.gradient() returns None (where in_tape is a GradientTape nested inside the outer GradientTape called tape; I have included my code below).
I tried calling tape.gradient() directly on the output of in_tape.gradient(), with nothing returned. My next approach was to iterate over the output of in_tape.gradient() and apply tape.gradient() to each gradient individually (with respect to my model variables), with None being returned every time.
Each None I receive is a single value returned by the tape.gradient() call, not a list of None values, which I believe would indicate None for an individual partial derivative (something that would be expected in some cases).
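For reference, this minimal standalone sketch (a toy scalar function, not my model; the variable names are my own) is the nested-tape pattern, as I understand it from the eager execution guide, that I am trying to generalize:

import tensorflow as tf

tf.enable_eager_execution()

# Second derivative of y = x^3 at x = 3 via nested tapes
x = tf.constant(3.0)
with tf.GradientTape() as outer_tape:
    outer_tape.watch(x)
    with tf.GradientTape() as inner_tape:
        inner_tape.watch(x)
        y = x * x * x
    dy_dx = inner_tape.gradient(y, x)      # 3 * x^2 = 27
d2y_dx2 = outer_tape.gradient(dy_dx, x)    # 6 * x = 18
print(dy_dx.numpy(), d2y_dx2.numpy())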
For now I am only trying to get the second derivatives for the first set of weights (from the input layer to the hidden layer); once I have this working I will scale it up to include all of the weights.
import numpy as np
import tensorflow as tf
import tensorflow.contrib.eager as tfe
from tensorflow import keras

tf.enable_eager_execution()

mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# Flatten the images and scale pixel values to [0, 1]
train_images = train_images.reshape((train_images.shape[0], train_images.shape[1]*train_images.shape[2])).astype(np.float32)/255
test_images = test_images.reshape((test_images.shape[0], test_images.shape[1]*test_images.shape[2])).astype(np.float32)/255

num_epochs = 200
batch_size = 100
learning_rate = 0.0003

class MNISTModel(tf.keras.Model):
    def __init__(self, device='/gpu:0'):
        super(MNISTModel, self).__init__()
        self.device = device
        self.initializer = tf.initializers.random_uniform(0.0, 0.5)
        self.hidden = tf.keras.layers.Dense(200, use_bias=False, kernel_initializer=tf.initializers.random_uniform(0.0, 0.5), name="Hidden")
        self.out = tf.keras.layers.Dense(train_images.shape[1], use_bias=False, kernel_initializer=tf.initializers.random_uniform(0.0, 0.5), name="Output")
        self.hidden.build(train_images.shape[1])
        self.out.build(200)

    def call(self, x):
        return self.out(self.hidden(x))

def loss_func(model, x, y_):
    return tf.reduce_mean(tf.losses.mean_squared_error(labels=y_, predictions=model(x)))
    #return tf.reduce_mean((y_ - model(x))**4)

model = MNISTModel()
optimizer = tf.train.GradientDescentOptimizer(learning_rate)

for epochs in range(num_epochs):
    print("Started epoch ", epochs)
    print("Num batches is: ", train_images.shape[0]/batch_size)
    for i in range(0, 1):  # (int(train_images.shape[0]/batch_size)):
        with tfe.GradientTape(persistent=True) as tape:
            tape.watch(model.variables)
            with tfe.GradientTape() as in_tape:
                in_tape.watch(model.variables)
                loss = loss_func(model, train_images[0:batch_size], train_images[0:batch_size])

        # First derivatives of the loss with respect to every trainable variable
        grads = tape.gradient(loss, model.variables)

        # Attempt at the second derivatives of the input->hidden weights,
        # taken element by element; every call below returns None
        IH_partial_grads = np.array([])
        for i in range(len(grads[0])):
            collector = np.array([])
            for j in range(len(grads[0][i])):
                collector = np.append(collector, tape.gradient(grads[0][i][j], model.variables[0]))
            IH_partial_grads = np.append(IH_partial_grads, collector)

        optimizer.apply_gradients(zip(grads, model.variables), global_step=tf.train.get_or_create_global_step())

    print("Epoch test loss: ", loss_func(model, test_images, test_images))
My end goal is to form the Hessian matrix of the loss function with respect to all of the network's parameters.
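To make that goal concrete, here is a toy sketch of the kind of row-by-row construction I have in mind (my own simplification: a 3-element variable with a cubic loss standing in for my model, and names like outer_tape and grad_components are mine, not from my actual code):

import numpy as np
import tensorflow as tf

tf.enable_eager_execution()

w = tf.Variable([1.0, 2.0, 3.0])

with tf.GradientTape(persistent=True) as outer_tape:
    with tf.GradientTape() as inner_tape:
        loss = tf.reduce_sum(w * w * w)    # toy loss; its Hessian is diag(6 * w)
    grad = inner_tape.gradient(loss, w)    # first derivatives, shape (3,)
    # Index the gradient inside the outer tape so the slicing ops are recorded too
    grad_components = [grad[i] for i in range(3)]

# One outer_tape.gradient() call per first-derivative component gives one Hessian row
hessian = np.stack([outer_tape.gradient(g, w).numpy() for g in grad_components])
del outer_tape                             # release the persistent tape
print(hessian)                             # expected: diag([6., 12., 18.])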
Any help is appreciated!