I am trying to reproduce the results from this paper: http://papers.nips.cc/paper/9617-deep-leakage-from-gradients.pdf
The gist is that it tries to recreate the input and output training data using only the model weights and gradients (which matters for distributed learning). Their setup is described as an algorithm in the paper.
The part I am struggling to implement in TensorFlow is steps 4 and 5: as far as I understand, they minimize the squared difference between the fake gradients and the real gradients with respect to the fake input and output data, which means differentiating through a gradient. I am not sure how to handle that second-order derivative correctly.
Here is my naive attempt:
```python
import numpy as np
import tensorflow as tf

# x_batch_train is the real input data
# y_batch_train is the real output data
# fakex is the fake input data
# fakey is the fake output data
fakex = tf.convert_to_tensor(np.random.random(x_batch_train.numpy().shape))
fakey = tf.convert_to_tensor(np.random.random(y_batch_train.numpy().shape))

with tf.GradientTape(persistent=True) as tape:
    logits = model(x_batch_train, training=True)
    # The normal loss using the real data
    loss_value = loss_fn(y_batch_train, logits)
    fakeLogits = model(fakex, training=True)
    # The fake loss using the fake data
    fake_loss_value = loss_fn(fakey, fakeLogits)

# The normal gradient
grads = tape.gradient(loss_value, model.trainable_weights)
# The fake gradient
fakegrads = tape.gradient(fake_loss_value, model.trainable_weights)
# The MSE between the normal gradient and the fake gradient
inputloss = tf.keras.losses.mean_squared_error(grads, fakegrads)
# The gradient of that loss w.r.t. the fake input data
inputGrad = tape.gradient(inputloss, fakex)
# The gradient of that loss w.r.t. the fake output data
outputGrad = tape.gradient(inputloss, fakey)

print(inputGrad)
print(outputGrad)
```
Everything I have tried so far either throws an error or returns None for inputGrad and outputGrad. How do I compute these second-order derivatives with tape.gradient (assuming tape.gradient is the right tool for this)?
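For reference, I am aware of the generic nested-tape pattern for second derivatives from the TensorFlow docs, roughly like the toy sketch below (the function and variable here are just placeholders, not my model), but I don't see how to map it onto the gradient-matching loss above:

```python
import tensorflow as tf

# Toy second-derivative example with nested tapes; x is just a placeholder
# variable, not my actual fake data.
x = tf.Variable(2.0)

with tf.GradientTape() as outer_tape:
    with tf.GradientTape() as inner_tape:
        y = x ** 3                       # y = x^3
    dy_dx = inner_tape.gradient(y, x)    # first derivative: 3 * x^2 -> 12.0
d2y_dx2 = outer_tape.gradient(dy_dx, x)  # second derivative: 6 * x -> 12.0

print(dy_dx.numpy(), d2y_dx2.numpy())
```

In my case, though, the "inner" gradient is taken with respect to the model weights while the "outer" gradient is with respect to fakex/fakey, and I am not sure how the persistent tape above fits into that structure.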
Thanks!