I got the following (pseudo)code from the docs:
words_in_dataset = tf.placeholder(tf.float32, [time_steps, batch_size, num_features])
...
for current_batch_of_words in words_in_dataset:
But how can we iterate over a placeholder?
Answer 0 (score: 0)
You can't do that unless you use eager execution, which basically evaluates operations dynamically instead of building a graph. In your case this is just pseudocode, and somewhat misleading pseudocode at that; it only hints at how the algorithm steps over the batches in the given word dataset.
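For illustration, a minimal sketch of what that loop could look like with eager execution enabled (TF 1.x API; the random input here is just a stand-in for real data):

import tensorflow as tf

tf.enable_eager_execution()

# In eager mode tensors are evaluated immediately and are iterable,
# so a plain Python for-loop over the first dimension just works.
words_in_dataset = tf.random_uniform([3, 2, 5])  # [time_steps, batch_size, num_features]
for current_batch_of_words in words_in_dataset:
    print(current_batch_of_words.shape)  # (2, 5) for each time step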
Answer 1 (score: 0)
Placeholders only work in graph (session) mode. They are not available in eager mode, and they would not make sense there: a placeholder's job is to feed a tensor into a graph, and since eager mode does not build a graph, there is no need for placeholders in the first place.
As for iterating over the placeholder and the pseudocode: the pseudocode only illustrates the algorithm we are going to implement.
To iterate over a placeholder, you would do something like this:
import tensorflow as tf

X = tf.placeholder(dtype=tf.float32, shape=[5])

with tf.Session() as sess:
    for i in range(5):
        print(sess.run(X[i], feed_dict={X: [1, 2, 3, 4, 5]}))
The output will then be:
1.0
2.0
3.0
4.0
5.0
The slicing rules are the same as for numpy arrays.
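For example, a range slice on a placeholder evaluates just like it would on a numpy array (a small self-contained sketch):

import tensorflow as tf

X = tf.placeholder(dtype=tf.float32, shape=[5])

with tf.Session() as sess:
    # X[1:4] is itself a tensor and can be run like any other
    print(sess.run(X[1:4], feed_dict={X: [1, 2, 3, 4, 5]}))  # [2. 3. 4.]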
Answer 2 (score: 0)
Since each batch has to be split along the time steps, you can do the following:
for current_batch_of_words in tf.unstack(words_in_dataset, axis=0):
Example code:
import tensorflow as tf

# Set LSTM params
time_steps = 3
num_features = 5
batch_size = 2

# Input placeholder
words_in_dataset = tf.placeholder(tf.float32, [time_steps, batch_size, num_features])
lstm = tf.contrib.rnn.BasicLSTMCell(num_units=10)

# Initial state of the LSTM memory.
hidden_state, current_state = lstm.zero_state(batch_size, dtype=tf.float32)
state = hidden_state, current_state

# Create a loop of N LSTM cells, N = time_steps.
outputs = []
for current_batch_of_words in tf.unstack(words_in_dataset, axis=0):
    # The value of state is updated after processing each batch of words.
    output, state = lstm(current_batch_of_words, state)
    outputs.append(output)
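To actually execute this graph, you would feed an input of the right shape through a session. Continuing from the block above, a sketch with made-up random data:

import numpy as np

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Dummy input of shape [time_steps, batch_size, num_features]
    dummy_words = np.random.rand(time_steps, batch_size, num_features)
    results = sess.run(outputs, feed_dict={words_in_dataset: dummy_words})
    print(len(results), results[0].shape)  # 3 time steps, each output is (2, 10)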