I have built a graph in TensorFlow that is split into two parts:
This part works fine.
I want to turn this graph into a recurrent one by having the second part compute its output from the previous window's output, like this: part 2 takes [a,b,c] and a default x0 to produce x1, then [[b,c,d], x1] outputs x2, then [[c,d,e], x2] outputs x3, and so on.
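In plain Python terms, the recurrence I am after looks like the sketch below (step here is just a stand-in for whatever part 2 of the graph computes, not real code from my graph):
def run_windows(sequence, x0, step, window=3):
    # step(window_values, previous_output) -> next_output,
    # e.g. step([a, b, c], x0) -> x1, step([b, c, d], x1) -> x2, ...
    x = x0
    outputs = []
    for i in range(len(sequence) - window + 1):
        x = step(sequence[i:i + window], x)
        outputs.append(x)
    return outputs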
How can I achieve this?
Answer (score 0):
If you mean to treat each 3-letter array as one input step, i.e.:
step 1: [abc]
step 2: [bcd]
step 3: [cde]
then the hidden state is propagated at every time step, and since the hidden state is the same as the output, there is nothing extra for you to do.
import tensorflow as tf
import numpy as np
sess = tf.InteractiveSession()
def lstm_cell(hidden_size):
    return tf.contrib.rnn.BasicLSTMCell(num_units=hidden_size)
in_seqlen = 3
input_dim = 3
# input: [batch, sequence length, features per step]
x = tf.placeholder("float", [None, in_seqlen, input_dim])
out, state = tf.nn.dynamic_rnn(lstm_cell(input_dim), x, dtype=tf.float32)
...
sess.run(tf.global_variables_initializer())
output, states = sess.run([out, state], feed_dict={x:[[[1,2,3],[2,3,4],[3,4,5]]]})
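For reference, with this run out has shape (batch, in_seqlen, hidden) and state is an LSTMStateTuple, so you can check:
print(output.shape)                    # (1, 3, 3)
print(states.c.shape, states.h.shape)  # (1, 3) (1, 3); states.h equals output[:, -1, :]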
If, on the other hand, you mean to treat each window as its own sequence, i.e.:
step 1: a,x0
step 2: b,x0
step 3: c,x0
output: x1
step 1: b,x1
step 2: c,x1
step 3: d,x1
output: x2
etc...
then every time you run the session, you need to feed the last state back in as its initial state:
...
in_seqlen = 3
input_dim = 1
hidden_dim = input_dim
x = tf.placeholder(tf.float32, [None, in_seqlen, input_dim])
# s holds the LSTM state as a (cell state, hidden state) pair
s = tf.placeholder(tf.float32, [2, None, hidden_dim])
state_tuple = tf.nn.rnn_cell.LSTMStateTuple(s[0], s[1])
out, state = tf.nn.dynamic_rnn(lstm_cell(hidden_dim), x, initial_state=state_tuple, dtype=tf.float32)
...
sess.run(tf.global_variables_initializer())
batch_size = 1
init_state = np.zeros((2, batch_size, hidden_dim))
output, states = sess.run([out, state], feed_dict={x: [[[1],[2],[3]]], s: init_state})
# feed the state of the previous run
output, states = sess.run([out, state], feed_dict={x: [[[1],[2],[3]]], s: states})
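Putting it together for the sliding windows from your question, a minimal driver loop might look like this (the numeric data is just an assumption for illustration):
data = np.arange(1, 8, dtype=np.float32)  # stand-in for a, b, c, d, ...
states = init_state                       # start from the default state, i.e. x0
for i in range(len(data) - in_seqlen + 1):
    window = data[i:i + in_seqlen].reshape(1, in_seqlen, input_dim)
    # each run consumes one window and carries the LSTM state forward
    output, states = sess.run([out, state], feed_dict={x: window, s: states})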
You will also need to add a target placeholder, a loss, and so on.
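For example, a minimal sketch (the target placeholder y, the squared-error loss, and the optimizer are my assumptions, not part of the original code):
y = tf.placeholder(tf.float32, [None, in_seqlen, hidden_dim])  # assumed targets
loss = tf.reduce_mean(tf.square(out - y))                      # assumed squared-error loss
train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
Define these before running tf.global_variables_initializer(), since the optimizer creates variables of its own.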
Useful:
TensorFlow: Remember LSTM state for next batch (stateful LSTM)
http://colah.github.io/posts/2015-08-Understanding-LSTMs/