Is it possible to train a generative model (i.e., a variational autoencoder with a custom loss computation) using TensorFlow's TPUEstimator?
A simplified version of my VAE:
def model_fn(features, labels, mode, params):
    # Encoder layers
    x = layers.Input()
    h = conv1D()(x)
    # Bottleneck layer
    z_mean = Dense()(h)
    z_log_var = Dense()(h)

    def sampling(args):
        z_mean_, z_log_var_ = args
        epsilon = tf.random_normal()
        return z_mean_ + tf.exp(z_log_var_/2)*epsilon

    z = Lambda(sampling, name='lambda')([z_mean, z_log_var])
    # Decoder layers
    h = Dense(z)
    x_decoded = TimeDistributed(Dense(activation='softmax'))(h)
    # VAE
    vae = tf.keras.models.Model(x, x_decoded)

    # VAE loss
    def vae_loss(x, x_decoded_mean):
        x = flatten(x)
        x_decoded_mean = flatten(x_decoded_mean)
        xent_loss = binary_crossentropy(x, x_decoded_mean)
        kl_loss = mean(1 + z_log_var - square(z_mean) - exp(z_log_var))
        return xent_loss + kl_loss

    optimizer = tf.train.AdamOptimizer()
    optimizer = tpu_optimizer.CrossShardOptimizer(optimizer)
    train_op = optimizer.minimize(vae_loss, global_step=tf.train.get_global_step())
    return tpu_estimator.TPUEstimatorSpec(mode=mode, loss=vae_loss, train_op=train_op)
The TPU configuration initializes and the dataset loads correctly with my input_fn, but I get the following error, triggered by the custom loss function:
vae_loss() error:
File "TPUest.py", line 107, in model_fn
    train_op = optimizer.minimize(vae_loss, global_step=tf.train.get_global_step())
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 414, in minimize
    grad_loss=grad_loss)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_optimizer.py", line 84, in compute_gradients
    loss *= scale
TypeError: unsupported operand type(s) for *=: 'function' and 'float'
Answer 0 (score: 1)
The call to optimizer.minimize requires a Tensor loss, but you are passing a Python function (whose result, given the appropriate inputs, would evaluate to what you want). See https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer#minimize
What you need to do is explicitly construct the vae_loss Tensor in the code above, rather than defining a function. During execution, data will then propagate from your input layer through to this loss computation.
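A minimal sketch of the fix, with the TPU-specific pieces omitted so it runs standalone (written against `tf.compat.v1` so it also works under TensorFlow 2.x in graph mode; the layer sizes, toy data, and plain `matmul` encoder/decoder are stand-ins for the real model, not the asker's actual layers). The key change is that `vae_loss` is built as a Tensor by applying ops to the graph's tensors, and that Tensor, not a function, is handed to `minimize()`:

```python
import numpy as np
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()  # TF 1.x-style graph mode, as in the question

tf1.set_random_seed(0)
np.random.seed(0)

x_in = tf1.placeholder(tf.float32, shape=[None, 4])  # toy input placeholder

# Tiny stand-ins for the encoder/decoder weights (hypothetical shapes).
w_mean = tf1.get_variable("w_mean", [4, 2])
w_lv = tf1.get_variable("w_lv", [4, 2])
w_dec = tf1.get_variable("w_dec", [2, 4])

# Encoder -> reparameterization -> decoder, all as graph ops.
z_mean = tf.matmul(x_in, w_mean)
z_log_var = tf.matmul(x_in, w_lv)
epsilon = tf1.random_normal(tf.shape(z_mean))
z = z_mean + tf.exp(z_log_var / 2) * epsilon
x_decoded = tf.sigmoid(tf.matmul(z, w_dec))

# Build the loss as a *Tensor*, not a Python function.
eps = 1e-7  # numerical-stability fudge for the logs
xent_loss = -tf.reduce_mean(
    x_in * tf.math.log(x_decoded + eps)
    + (1 - x_in) * tf.math.log(1 - x_decoded + eps))
kl_loss = -0.5 * tf.reduce_mean(
    1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var))
vae_loss = xent_loss + kl_loss  # a Tensor

# minimize() now receives a Tensor; on TPU you would wrap the optimizer
# in CrossShardOptimizer first, with the same call shape.
optimizer = tf1.train.AdamOptimizer(0.01)
train_op = optimizer.minimize(vae_loss)

with tf1.Session() as sess:
    sess.run(tf1.global_variables_initializer())
    data = (np.random.rand(8, 4) > 0.5).astype(np.float32)
    before = sess.run(vae_loss, {x_in: data})
    for _ in range(200):
        sess.run(train_op, {x_in: data})
    after = sess.run(vae_loss, {x_in: data})
    print(isinstance(vae_loss, tf.Tensor))  # the loss is a Tensor now
```

In the asker's `model_fn`, the same idea means calling the graph ops on `z_mean`, `z_log_var`, and `x_decoded` directly to produce a `vae_loss` tensor, and passing that tensor to both `optimizer.minimize` and `TPUEstimatorSpec(loss=...)`.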