In non-eager (graph) mode, I can run this without any problem:
import tensorflow as tf

s = tf.complex(tf.Variable(1.0), tf.Variable(1.0))
train_op = tf.train.AdamOptimizer(0.01).minimize(tf.abs(s))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(5):
        _, s_ = sess.run([train_op, s])
        print(s_)
Output:

(1+1j)
(0.99+0.99j)
(0.98+0.98j)
(0.9700001+0.9700001j)
(0.9600001+0.9600001j)
But I can't seem to find an equivalent expression in eager mode. I tried the following, but TF complains:
tfe = tf.contrib.eager

s = tf.complex(tfe.Variable(1.0), tfe.Variable(1.0))

def obj(s):
    return tf.abs(s)

with tf.GradientTape() as tape:
    loss = obj(s)
grads = tape.gradient(loss, [s])
optimizer.apply_gradients(zip(grads, [s]))
When calling GradientTape.gradient, the dtype of the source tensor must be floating (e.g. tf.float32), got tf.complex64

and

No gradients provided for any variable: ['tf.Tensor((1+1j), shape=(), dtype=complex64)']
How can I train a complex variable in eager mode?
Answer 0 (score: 0)
With eager mode in TensorFlow 2, you can keep the real and imaginary parts as separate real-valued variables:
import tensorflow as tf

# The optimizer is not defined in the original snippet; Adam is assumed here
optimizer = tf.optimizers.Adam(0.01)
r, i = tf.Variable(1.0), tf.Variable(1.0)

def obj(s):
    return tf.abs(s)

with tf.GradientTape() as tape:
    s = tf.complex(r, i)  # build the complex value inside the tape
    loss = obj(s)
grads = tape.gradient(loss, [r, i])  # gradients w.r.t. the real variables
optimizer.apply_gradients(zip(grads, [r, i]))
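As a minimal sketch (assuming TensorFlow 2 and the Adam optimizer with learning rate 0.01 shown above), the same idea can be wrapped in a short training loop that mirrors the graph-mode example:

for step in range(5):
    with tf.GradientTape() as tape:
        loss = tf.abs(tf.complex(r, i))  # rebuild the complex value each step
    grads = tape.gradient(loss, [r, i])
    optimizer.apply_gradients(zip(grads, [r, i]))
    print(tf.complex(r, i).numpy())

Because the trainable variables r and i are real-valued, tape.gradient never sees a complex source tensor, which sidesteps the dtype restriction from the error message in the question.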