In tensorflow version 2.0.0-beta1 I am trying to implement a keras layer whose weights are sampled from a normal distribution. I would like the mean of that distribution to be a trainable parameter. Thanks to the "reparameterization trick" already implemented in tensorflow-probability, it should in principle be possible, if I am not mistaken, to compute gradients with respect to the mean of the distribution. However, when I try to compute the gradient of the network output with respect to the mean variable using tf.GradientTape(), the returned gradient is None.

I created two minimal examples: one of a layer with deterministic weights and one of a layer with random weights. The gradients of the deterministic layer are computed as expected, but in the case of the random layer the gradient is None. There is no error message explaining why the gradient is None, and I am somewhat stuck.
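To make explicit what I expect the reparameterization trick to do, here is a toy standalone sketch (my own illustration, separate from the minimal examples below, using a plain Normal distribution): a gradient should flow from a sample back to a trainable mean.

import tensorflow as tf
import tensorflow_probability as tfp

# Toy sketch: tfp's Normal is fully reparameterized, i.e. sampling is
# internally expressed as loc + scale * eps, so gradients can flow
# from a sample back to the trainable mean.
loc = tf.Variable(0.5)
with tf.GradientTape() as tape:
    dist = tfp.distributions.Normal(loc=loc, scale=1.)
    sample = dist.sample()
    loss = tf.reduce_sum(sample ** 2)
print(tape.gradient(loss, loc))  # expected: a concrete Tensor, not None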
Minimal example code:

A: Here is the minimal example for the deterministic network:
import tensorflow as tf; print(tf.__version__)
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Layer, Input
from tensorflow.keras.models import Model
from tensorflow.keras.initializers import RandomNormal
import tensorflow_probability as tfp
import numpy as np

# example data
x_data = np.random.rand(99, 3).astype(np.float32)

# # A: DETERMINISTIC MODEL

# 1 Define Layer

class deterministic_test_layer(Layer):

    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        super(deterministic_test_layer, self).__init__(**kwargs)

    def build(self, input_shape):
        self.kernel = self.add_weight(name='kernel',
                                      shape=(input_shape[1], self.output_dim),
                                      initializer='uniform',
                                      trainable=True)
        super(deterministic_test_layer, self).build(input_shape)

    def call(self, x):
        return K.dot(x, self.kernel)

    def compute_output_shape(self, input_shape):
        return (input_shape[0], self.output_dim)

# 2 Create model and calculate gradient

x = Input(shape=(3,))
fx = deterministic_test_layer(1)(x)
deterministic_test_model = Model(name='test_deterministic', inputs=[x], outputs=[fx])

print('\n\n\nCalculating gradients for deterministic model: ')

for x_now in np.split(x_data, 3):
    # print(x_now.shape)
    with tf.GradientTape() as tape:
        fx_now = deterministic_test_model(x_now)
        grads = tape.gradient(
            fx_now,
            deterministic_test_model.trainable_variables,
        )
        print('\n', grads, '\n')

print(deterministic_test_model.summary())
B: The following example is very similar, but instead of deterministic weights I tried to use randomly sampled weights for the test layer (sampled at call() time!):
# # B: RANDOM MODEL

# 1 Define Layer

class random_test_layer(Layer):

    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        super(random_test_layer, self).__init__(**kwargs)

    def build(self, input_shape):
        self.mean_W = self.add_weight('mean_W',
                                      initializer=RandomNormal(mean=0.5, stddev=0.1),
                                      trainable=True)
        self.kernel_dist = tfp.distributions.MultivariateNormalDiag(loc=self.mean_W,
                                                                    scale_diag=(1.,))
        super(random_test_layer, self).build(input_shape)

    def call(self, x):
        sampled_kernel = self.kernel_dist.sample(sample_shape=x.shape[1])
        return K.dot(x, sampled_kernel)

    def compute_output_shape(self, input_shape):
        return (input_shape[0], self.output_dim)

# 2 Create model and calculate gradient

x = Input(shape=(3,))
fx = random_test_layer(1)(x)
random_test_model = Model(name='test_random', inputs=[x], outputs=[fx])

print('\n\n\nCalculating gradients for random model: ')

for x_now in np.split(x_data, 3):
    # print(x_now.shape)
    with tf.GradientTape() as tape:
        fx_now = random_test_model(x_now)
        grads = tape.gradient(
            fx_now,
            random_test_model.trainable_variables,
        )
        print('\n', grads, '\n')

print(random_test_model.summary())
Expected/actual output:

A: The deterministic network works as expected and the gradients are computed. The output is:
2.0.0-beta1
Calculating gradients for deterministic model:
 [<tf.Tensor: id=26, shape=(3, 1), dtype=float32, numpy=
array([[17.79845  ],
       [15.764006 ],
       [14.4183035]], dtype=float32)>]

 [<tf.Tensor: id=34, shape=(3, 1), dtype=float32, numpy=
array([[16.22232 ],
       [17.09122 ],
       [16.195663]], dtype=float32)>]

 [<tf.Tensor: id=42, shape=(3, 1), dtype=float32, numpy=
array([[16.382954],
       [16.074356],
       [17.718027]], dtype=float32)>]

Model: "test_deterministic"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 3)]               0
_________________________________________________________________
deterministic_test_layer (de (None, 1)                 3
=================================================================
Total params: 3
Trainable params: 3
Non-trainable params: 0
_________________________________________________________________
None
B: In the case of the otherwise similar random network, however, the gradients are not computed as expected (via the reparameterization trick); instead, they are None. The full output is:
Calculating gradients for random model:
[None]
[None]
[None]
Model: "test_random"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 3)] 0
_________________________________________________________________
random_test_layer (random_te (None, 1) 1
=================================================================
Total params: 1
Trainable params: 1
Non-trainable params: 0
_________________________________________________________________
None
Can someone point out the problem to me here?
Answer 0 (score: 1):
It seems that the input parameters of tfp.distributions.MultivariateNormalDiag (e.g. loc) are not differentiable. In that case, the following is equivalent:
class random_test_layer(Layer):

    ...

    def build(self, input_shape):
        ...
        self.kernel_dist = tfp.distributions.MultivariateNormalDiag(loc=0, scale_diag=(1.,))
        super(random_test_layer, self).build(input_shape)

    def call(self, x):
        sampled_kernel = self.kernel_dist.sample(sample_shape=x.shape[1]) + self.mean_W
        return K.dot(x, sampled_kernel)
In this case, however, the loss is differentiable with respect to self.mean_W.
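As a quick sanity check, here is a standalone sketch (toy input shapes; kernel_dist and mean_W mirror the layer's attributes) showing that the tape now returns an actual gradient for the mean:

import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

# Sketch: with the sample shifted by mean_W (instead of parameterizing loc),
# the gradient of the output with respect to mean_W is no longer None.
mean_W = tf.Variable(0.5)
kernel_dist = tfp.distributions.MultivariateNormalDiag(loc=0., scale_diag=(1.,))
x = np.random.rand(4, 3).astype(np.float32)
with tf.GradientTape() as tape:
    sampled_kernel = kernel_dist.sample(sample_shape=3) + mean_W  # shape (3, 1)
    fx = tf.matmul(x, sampled_kernel)
print(tape.gradient(fx, mean_W))  # a concrete Tensor, not None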
Please note: although this approach may work for you, be aware that since we moved loc out of the distribution, calling the density function self.kernel_dist.prob will give different results.