I am trying to implement a simple LSTM cell in TensorFlow to compare its performance against another cell I implemented earlier.
x = tf.placeholder(tf.float32,[BATCH_SIZE,SEQ_LENGTH,FEATURE_SIZE])
y = tf.placeholder(tf.float32,[BATCH_SIZE,SEQ_LENGTH,FEATURE_SIZE])
weights = { 'out': tf.Variable(tf.random_normal([FEATURE_SIZE, 8 * FEATURE_SIZE, NUM_LAYERS]))}
biases = { 'out': tf.Variable(tf.random_normal([4 * FEATURE_SIZE, NUM_LAYERS]))}
def RNN(x, weights, biases):
    x = tf.unstack(x, SEQ_LENGTH, 1)
    lstm_cell = tf.keras.layers.LSTMCell(NUM_LAYERS)
    outputs = tf.keras.layers.RNN(lstm_cell, x, dtype=tf.float32)
    return outputs
pred = RNN(x, weights, biases)
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
I started from an example I found on GitHub and tried to modify it to get the behavior I need, but I get this error message:
TypeError: Failed to convert object of type <class 'tensorflow.python.keras.layers.recurrent.RNN'> to Tensor. Contents: <tensorflow.python.keras.layers.recurrent.RNN object at 0x7fe437248710>. Consider casting elements to a supported type.
Answer:
Try

outputs = tf.keras.layers.RNN(lstm_cell, dtype=tf.float32)(x)

instead. The RNN constructor only builds the layer object; passing x as a second positional argument does not feed it any data, so outputs ends up being the layer itself rather than a tensor (which is exactly what the error message complains about). Calling the constructed layer on x is what returns the output tensor.
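Folded back into the question's RNN function, the fix would look roughly like the sketch below. It rests on a few assumptions that go beyond the one-liner above: the tf.unstack call is dropped because tf.keras.layers.RNN consumes the full [BATCH_SIZE, SEQ_LENGTH, FEATURE_SIZE] tensor directly, and return_sequences=True is added so the output keeps a per-timestep dimension matching the y placeholder.

def RNN(x, weights, biases):
    # Build the cell; LSTMCell's first argument is the number of units.
    lstm_cell = tf.keras.layers.LSTMCell(NUM_LAYERS)
    # Construct the layer first, then call it on the input tensor.
    rnn_layer = tf.keras.layers.RNN(lstm_cell, return_sequences=True, dtype=tf.float32)
    outputs = rnn_layer(x)  # shape: [BATCH_SIZE, SEQ_LENGTH, NUM_LAYERS]
    return outputs

pred = RNN(x, weights, biases)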
For reference, here is the example from the TF documentation:
# Let's use this cell in a RNN layer:
cell = MinimalRNNCell(32)
x = keras.Input((None, 5))
layer = RNN(cell)
y = layer(x)
# Here's how to use the cell to build a stacked RNN:
cells = [MinimalRNNCell(32), MinimalRNNCell(64)]
x = keras.Input((None, 5))
layer = RNN(cells)
y = layer(x)
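MinimalRNNCell in that snippet is not the question's LSTM cell; it is a custom cell class defined earlier on the same documentation page. A close paraphrase of that definition is included below (from memory of the docs, and assuming "from tensorflow import keras"), only so the quoted snippet is self-contained.

from tensorflow import keras
from tensorflow.keras import backend as K

class MinimalRNNCell(keras.layers.Layer):

    def __init__(self, units, **kwargs):
        self.units = units
        self.state_size = units  # size of the recurrent state carried between steps
        super(MinimalRNNCell, self).__init__(**kwargs)

    def build(self, input_shape):
        # Input-to-hidden and hidden-to-hidden weight matrices.
        self.kernel = self.add_weight(shape=(input_shape[-1], self.units),
                                      initializer='uniform',
                                      name='kernel')
        self.recurrent_kernel = self.add_weight(shape=(self.units, self.units),
                                                initializer='uniform',
                                                name='recurrent_kernel')
        self.built = True

    def call(self, inputs, states):
        # One recurrence step: combine the current input with the previous output.
        prev_output = states[0]
        h = K.dot(inputs, self.kernel)
        output = h + K.dot(prev_output, self.recurrent_kernel)
        return output, [output]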