Is the RNN initial state reset for subsequent mini-batches?

Asked: 2016-07-18 16:18:38

Tags: time-series tensorflow recurrent-neural-network

Can someone clarify whether the initial state of the RNN in TF is reset for subsequent mini-batches, or whether the last state of the previous mini-batch is used, as suggested in Ilya Sutskever et al., ICLR 2015?

2 answers:

Answer 0 (score: 19):

The tf.nn.dynamic_rnn() or tf.nn.rnn() operations allow you to specify the initial state of the RNN with the initial_state parameter. If you don't specify this parameter, the hidden states will be initialized to zero vectors at the beginning of each training batch.
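To make this concrete, here is a minimal sketch (not part of the original answer; batch_size, max_length, and frame_size are assumed to be defined) showing that leaving out initial_state is equivalent to passing the cell's zero state explicitly:

import tensorflow as tf

cell = tf.nn.rnn_cell.GRUCell(256)
data = tf.placeholder(tf.float32, (batch_size, max_length, frame_size))

# Default form: with no initial_state, dtype must be given and the state
# starts at zero for every batch:
#   outputs, final_state = tf.nn.dynamic_rnn(cell, data, dtype=tf.float32)

# Equivalent explicit form: pass the cell's zero state (or any tensor of
# the right shape, e.g. the previous batch's final state) as initial_state.
init = cell.zero_state(batch_size, tf.float32)
outputs, final_state = tf.nn.dynamic_rnn(cell, data, initial_state=init)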

In TensorFlow, you can wrap tensors in tf.Variable() to keep their values in the graph between multiple session runs. Just make sure to mark them as non-trainable, because optimizers adjust all trainable variables by default.

import tensorflow as tf

data = tf.placeholder(tf.float32, (batch_size, max_length, frame_size))

cell = tf.nn.rnn_cell.GRUCell(256)
# Non-trainable variable that keeps the hidden state across session runs.
state = tf.Variable(cell.zero_state(batch_size, tf.float32), trainable=False)
output, new_state = tf.nn.dynamic_rnn(cell, data, initial_state=state)

# Write the final state back into the variable before returning the output.
with tf.control_dependencies([state.assign(new_state)]):
    output = tf.identity(output)

sess = tf.Session()
sess.run(tf.initialize_all_variables())
sess.run(output, {data: ...})

I haven't tested this code, but it should give you a hint in the right direction. There is also tf.nn.state_saving_rnn(), to which you can provide a state saver object, but I haven't used it yet.
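One practical detail the snippet leaves open: to start fresh on a new, independent sequence, the zero state can be assigned back into the variable. A minimal sketch reusing the names above; the reset_state op is hypothetical, not part of the original snippet:

# Assign the zero state back into the persistent variable between sequences.
reset_state = state.assign(cell.zero_state(batch_size, tf.float32))
sess.run(reset_state)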

Answer 1 (score: 8):

Complementing danijar's answer, here is the code for an LSTM whose state is a tuple (state_is_tuple=True). It also supports multiple layers.

We define two functions - one for getting the state variables with an initial zero state, and one for returning an operation that we can pass to session.run in order to update the state variables with the LSTM's last hidden state:

import tensorflow as tf


def get_state_variables(batch_size, cell):
    # For each layer, get the initial state and make a variable out of it
    # to enable updating its value.
    state_variables = []
    for state_c, state_h in cell.zero_state(batch_size, tf.float32):
        state_variables.append(tf.contrib.rnn.LSTMStateTuple(
            tf.Variable(state_c, trainable=False),
            tf.Variable(state_h, trainable=False)))
    # Return as a tuple, so that it can be fed to dynamic_rnn as an initial state
    return tuple(state_variables)


def get_state_update_op(state_variables, new_states):
    # Add an operation to update the train states with the last state tensors
    update_ops = []
    for state_variable, new_state in zip(state_variables, new_states):
        # Assign the new state to the state variables on this layer
        update_ops.extend([state_variable[0].assign(new_state[0]),
                           state_variable[1].assign(new_state[1])])
    # Return a tuple in order to combine all update_ops into a single operation.
    # The tuple's actual value should not be used.
    return tf.tuple(update_ops)

Similar to danijar's answer, we can use that to update the LSTM's state after each batch:

data = tf.placeholder(tf.float32, (batch_size, max_length, frame_size))
# LSTM cells are needed here: get_state_variables expects LSTMStateTuples.
cells = [tf.contrib.rnn.LSTMCell(256) for _ in range(num_layers)]
cell = tf.contrib.rnn.MultiRNNCell(cells)

# For each layer, get the initial state. states will be a tuple of LSTMStateTuples.
states = get_state_variables(batch_size, cell)

# Unroll the LSTM
outputs, new_states = tf.nn.dynamic_rnn(cell, data, initial_state=states)

# Add an operation to update the train states with the last state tensors.
update_op = get_state_update_op(states, new_states)

sess = tf.Session()
sess.run(tf.global_variables_initializer())
sess.run([outputs, update_op], {data: ...})

The main difference is that state_is_tuple=True makes the LSTM's state an LSTMStateTuple containing two variables (the cell state and the hidden state) instead of just a single variable. Using multiple layers then makes the LSTM's state a tuple of LSTMStateTuples - one per layer.
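If you also want to reset the stored states to zero between independent sequences, a small helper can be built on top of the functions above. A sketch; the name get_state_reset_op is introduced here and is not part of the original answer:

def get_state_reset_op(state_variables, cell, batch_size):
    # Return an operation that sets each variable in a list of
    # LSTMStateTuples back to the cell's zero state.
    zero_states = cell.zero_state(batch_size, tf.float32)
    return get_state_update_op(state_variables, zero_states)

Build the op once, e.g. reset_op = get_state_reset_op(states, cell, batch_size), and call sess.run(reset_op) between sequences to restore the initial zero state.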