How do I create independent LSTM cells in TensorFlow?

Asked: 2017-11-07 05:28:19

Tags: tensorflow time-series classification lstm rnn

I am trying to build an RNN classifier whose input is 3 different time series, each with 3 dimensions, and the time series can have different lengths. To handle this, I modeled 3 separate RNNs and concatenated them at the final layer.

However, I am getting the following error message:


ValueError: Variable rnn/multi_rnn_cell/cell_0/basic_lstm_cell/kernel already exists, disallowed. Did you mean to set reuse=True in VarScope?

import tensorflow as tf

timeSeries = ['outbound', 'rest', 'return']
n_steps = {
    'outbound': 3159,
    'rest': 3603,
    'return': 3226
}
n_inputs = 3
n_neurons = 20
n_outputs = 2
n_layers = 1

learning_rate = 0.001


# Labels, plus one input placeholder and one sequence-length placeholder per time series
y = tf.placeholder(tf.int32, [None], name="y")
X = {}
seq_length = {}
for timeSeriesName in timeSeries:
    with tf.name_scope(timeSeriesName + "_placeholders") as scope:
        X[timeSeriesName] = tf.placeholder(tf.float32, [None, n_steps[timeSeriesName], n_inputs])
        seq_length[timeSeriesName] = tf.placeholder(tf.int32, [None])


outputs = {}
states = {}
top_layer_h_state = {}
lstm_cells = {}
multi_cell = {}
finalRNNlayers = []
# Build one RNN (a stack of LSTM cells) per time series
for timeSeriesName in timeSeries:
    with tf.name_scope(timeSeriesName) as scope:
        lstm_cells[timeSeriesName] = [tf.contrib.rnn.BasicLSTMCell(num_units=n_neurons)
                                      for layer in range(n_layers)]
        multi_cell[timeSeriesName] = tf.contrib.rnn.MultiRNNCell(lstm_cells[timeSeriesName])
        outputs[timeSeriesName], states[timeSeriesName] = tf.nn.dynamic_rnn(
            multi_cell[timeSeriesName], X[timeSeriesName], dtype=tf.float32,
            sequence_length=seq_length[timeSeriesName])
        # Keep only the h state of the top LSTM layer for this time series
        top_layer_h_state[timeSeriesName] = states[timeSeriesName][-1][1]
        finalRNNlayers.append(top_layer_h_state[timeSeriesName])

with tf.name_scope("3Stages_mixed") as scope:
    concat3_top_layer_h_states = tf.concat(finalRNNlayers, axis=1)
    logits = tf.layers.dense(concat3_top_layer_h_states, n_outputs, name="softmax")

I want each time series to have its own independent LSTM cells, each with its own weights, so reuse is not an option. How should I fix this error?

The full traceback of the error can be found here.

1 Answer:

Answer 0 (score: 2):

Change tf.name_scope(timeSeriesName) to tf.variable_scope(timeSeriesName). The difference between tf.name_scope and tf.variable_scope is discussed in this question. In your case, what matters is that tf.get_variable ignores name scopes, and the LSTM cell parameters are created precisely with tf.get_variable.

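As a quick illustration of that point, here is a minimal sketch (not part of the original answer) showing that tf.get_variable ignores tf.name_scope but respects tf.variable_scope; the scope and variable names are just placeholders:

import tensorflow as tf

with tf.name_scope('ns'):
    v1 = tf.get_variable('v1', shape=[1])
with tf.variable_scope('vs'):
    v2 = tf.get_variable('v2', shape=[1])

# The name scope does not prefix the variable's name, the variable scope does:
print(v1.name)  # v1:0
print(v2.name)  # vs/v2:0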
Sample code to see the difference with actual LSTM cells:

import tensorflow as tf

state = tf.zeros([32, 6])

input1 = tf.placeholder(tf.float32, [32, 10])
input2 = tf.placeholder(tf.float32, [32, 10])

# Works ok: each cell lives in its own variable scope, so it gets its own weights
with tf.variable_scope('scope-1'):
  tf.nn.rnn_cell.BasicLSTMCell(3, state_is_tuple=False)(input1, state)
with tf.variable_scope('scope-2'):
  tf.nn.rnn_cell.BasicLSTMCell(3, state_is_tuple=False)(input2, state)

# Fails: tf.get_variable ignores the name scope, so the second cell tries to
# re-create the already existing 'basic_lstm_cell/kernel' variable
with tf.name_scope('name-1'):
  tf.nn.rnn_cell.BasicLSTMCell(3, state_is_tuple=False)(input1, state)
with tf.name_scope('name-2'):
  tf.nn.rnn_cell.BasicLSTMCell(3, state_is_tuple=False)(input2, state)
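
Applied to the code in the question, a minimal sketch of the fix (reusing the names defined above) simply swaps the scope type in the RNN-building loop:

for timeSeriesName in timeSeries:
    # tf.variable_scope gives each time series its own variable namespace,
    # so each MultiRNNCell creates its own independent weights.
    with tf.variable_scope(timeSeriesName) as scope:
        lstm_cells[timeSeriesName] = [tf.contrib.rnn.BasicLSTMCell(num_units=n_neurons)
                                      for layer in range(n_layers)]
        multi_cell[timeSeriesName] = tf.contrib.rnn.MultiRNNCell(lstm_cells[timeSeriesName])
        outputs[timeSeriesName], states[timeSeriesName] = tf.nn.dynamic_rnn(
            multi_cell[timeSeriesName], X[timeSeriesName], dtype=tf.float32,
            sequence_length=seq_length[timeSeriesName])
        top_layer_h_state[timeSeriesName] = states[timeSeriesName][-1][1]
        finalRNNlayers.append(top_layer_h_state[timeSeriesName])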