I have been trying to implement gradient descent over complex-valued parameters/weights in TensorFlow, but I get the following error message:
Traceback (most recent call last):
File "/home/reg/complex_gradients.py", line 15, in <module>
gate_gradients=optimizer.GATE_NONE)
File "/usr/local/python3/lib/python3.5/site-packages/tensorflow/python/training/optimizer.py", line 196, in minimize
grad_loss=grad_loss)
File "/usr/local/python3/lib/python3.5/site-packages/tensorflow/python/training/optimizer.py", line 257, in compute_gradients
self._assert_valid_dtypes([v for g, v in grads_and_vars if g is not None])
File "/usr/local/python3/lib/python3.5/site-packages/tensorflow/python/training/optimizer.py", line 379, in _assert_valid_dtypes
dtype, t.name, [v for v in valid_dtypes]))
ValueError: Invalid type tf.complex64 for w:0, expected: [tf.float32, tf.float64, tf.float16].
I boiled my code down to the following minimal example to reproduce the error:
import tensorflow as tf
x = tf.placeholder(dtype=tf.complex64)
y = tf.placeholder(dtype=tf.float32)
initial_values = tf.complex(real=tf.random_uniform([100, 100]), imag=tf.random_uniform([100, 100]))
weights = tf.Variable(initial_values, name='w', dtype=tf.complex64)
loss = tf.nn.l2_loss(tf.complex_abs(tf.matmul(x, weights)) - y, name='loss')
optimizer = tf.train.GradientDescentOptimizer(0.001)
train = optimizer.minimize(loss,
                           global_step=0.001,
                           gate_gradients=optimizer.GATE_NONE)
What am I missing? Does TensorFlow not support complex numbers? Any suggestions for getting this to work? Any help is appreciated.
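One workaround I have been considering, in case optimizers simply refuse complex64 variables: keep the real and imaginary parts as two separate float32 variables and only build the complex64 tensor inside the graph, so the optimizer only ever updates real-valued variables. Below is a rough sketch of that idea (I replaced tf.complex_abs with tf.abs, and I have not verified that every op involved has a registered gradient for complex inputs in my TensorFlow version), but I am not sure this is the intended approach:

import tensorflow as tf

# Same placeholders as in the snippet above.
x = tf.placeholder(dtype=tf.complex64)
y = tf.placeholder(dtype=tf.float32)

# Store real and imaginary parts as separate float32 variables,
# which are dtypes the optimizer accepts.
w_real = tf.Variable(tf.random_uniform([100, 100]), name='w_real')
w_imag = tf.Variable(tf.random_uniform([100, 100]), name='w_imag')

# Combine them into a complex64 tensor only inside the graph.
weights = tf.complex(w_real, w_imag, name='w')

# tf.abs on a complex tensor returns its (float) magnitude.
loss = tf.nn.l2_loss(tf.abs(tf.matmul(x, weights)) - y, name='loss')

optimizer = tf.train.GradientDescentOptimizer(0.001)
train = optimizer.minimize(loss, gate_gradients=optimizer.GATE_NONE)

Is something along these lines the recommended way, or is there proper support for complex variables that I am overlooking?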