I'm trying to reuse a previously defined LSTM cell in a multi-task learning model, but I can't resolve the ValueError I'm getting.
I tried setting tf.AUTO_REUSE on the LSTM cell and passing the scope to tf.nn.dynamic_rnn. The error and my current code snippet are posted below.
I need some way for my network to use the same LSTM layer for multiple tasks.
def LSTM_parser(self, sequence_in, lstm_sizes, keep_prob_, batch_size, biLSTM=False):
    with tf.variable_scope("encoder") as scope:
        cell = tf.contrib.rnn.BasicLSTMCell(self.lstm_dim, reuse=tf.AUTO_REUSE)
        initial_state = cell.zero_state(batch_size, tf.float64)
        lstm_output_1, final_state_1 = tf.nn.dynamic_rnn(
            cell, sequence_in, initial_state=initial_state, scope=scope)
    return (lstm_output_1, final_state_1)
# Error (raised on the call below):
output_data, final_state = build.LSTM_parser(sequence_in, lstm_size, keep_prob_, batch_size=batch_size, biLSTM=False)
ValueError: Variable encoder/basic_lstm_cell/kernel does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=tf.AUTO_REUSE in VarScope?
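For reference, here is a minimal sketch of the sharing pattern the error message hints at: setting reuse=tf.AUTO_REUSE on the variable_scope itself (not only on the cell), so that every call after the first reuses the variables created by the first. This is an assumption-laden illustration, not the original model: the function, placeholder names, and dimensions (shared_encoder, task_a_in, task_b_in, lstm_dim) are hypothetical, and it uses the tf.compat.v1 namespace so it also runs under TensorFlow 2.x where tf.contrib no longer exists.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

lstm_dim = 8  # hypothetical hidden size for this sketch

def shared_encoder(sequence_in, batch_size):
    # AUTO_REUSE on the scope: the first call creates
    # encoder/basic_lstm_cell/{kernel,bias}; later calls reuse them.
    with tf.compat.v1.variable_scope(
            "encoder", reuse=tf.compat.v1.AUTO_REUSE) as scope:
        cell = tf.compat.v1.nn.rnn_cell.BasicLSTMCell(lstm_dim)
        initial_state = cell.zero_state(batch_size, tf.float32)
        outputs, final_state = tf.compat.v1.nn.dynamic_rnn(
            cell, sequence_in, initial_state=initial_state, scope=scope)
    return outputs, final_state

# Two tasks with different sequence lengths but the same feature size,
# so the same kernel shape fits both.
task_a_in = tf.compat.v1.placeholder(tf.float32, [None, 5, 4])
task_b_in = tf.compat.v1.placeholder(tf.float32, [None, 7, 4])

out_a, _ = shared_encoder(task_a_in, tf.shape(task_a_in)[0])
out_b, _ = shared_encoder(task_b_in, tf.shape(task_b_in)[0])

# Both calls resolve to the same variables: one kernel and one bias
# under the "encoder" scope, not a duplicated set per task.
enc_vars = sorted(v.name for v in tf.compat.v1.global_variables()
                  if v.name.startswith("encoder"))
print(enc_vars)
```

The key difference from the snippet above is where reuse is declared: reuse=tf.AUTO_REUSE on the BasicLSTMCell alone is not enough, because tf.nn.dynamic_rnn creates the variables through the enclosing variable_scope, which must itself allow reuse.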