Variable reuse and train/predict split in TensorFlow

Asked: 2018-04-26 20:04:57

Tags: python tensorflow lstm rnn

I'm trying to get my LSTM RNN language model, based on TensorFlow's PTB tutorial, to work.

However, I'm confused about how to use and reuse variables, and about how to organize my code into a training part and a testing (prediction) part. Initially my code looked like this:

import tensorflow as tf

class RNNModel(object):
    def __init__(self, input_data, config, is_training=True):
        self.input_x = input_data[0]
        input_y = input_data[1]

        with tf.device('/cpu:0'):
            self.embedding_table = tf.Variable(
                tf.random_uniform([config.vocab_size, config.embedding_size], -1.0, 1.0),
                name="word_embeddings")
            self.embeddings = tf.nn.embedding_lookup(self.embedding_table, self.input_x)

        # LSTM layer
        output, final_state = self._build_rnn_graph(self.embeddings, config, is_training)

        self.softmax_w = tf.get_variable(
            "W", [config.hidden_size, config.vocab_size], dtype=tf.float32,
            initializer=tf.contrib.layers.xavier_initializer())
        self.softmax_b = tf.get_variable("b", [config.vocab_size], dtype=tf.float32)
        logits = tf.nn.xw_plus_b(output, self.softmax_w, self.softmax_b)

        # Reshape logits to be a 3-D tensor for sequence loss
        logits = tf.reshape(logits, [tf.shape(self.input_x)[0], config.num_steps, config.vocab_size])
        print('logits shape=', logits.shape)

        # Use the contrib sequence loss and average over the batches
        loss = tf.contrib.seq2seq.sequence_loss(
            logits,
            input_y,
            tf.ones([tf.shape(self.input_x)[0], config.num_steps]),
            average_across_timesteps=True,  # False
            average_across_batch=True)

        self._cost = loss
        # the original snippet used an undefined `optimizer`; Adam is assumed here
        optimizer = tf.train.AdamOptimizer(config.learning_rate)
        self.train_op = optimizer.minimize(loss)

    @property
    def cost(self):
        # exposed because the training loop below reads model.cost
        return self._cost

    def _build_rnn_graph(self, inputs, config, is_training):
        cell = tf.nn.rnn_cell.LSTMCell(config.hidden_size, forget_bias=0.0,
                                       state_is_tuple=True, reuse=not is_training)
        self._initial_state = cell.zero_state(tf.shape(self.input_x)[0], tf.float32)

        # static_rnn expects a length-num_steps list of [batch, embedding] tensors
        inputs = tf.unstack(inputs, num=config.num_steps, axis=1)
        outputs, final_state = tf.nn.static_rnn(cell=cell, inputs=inputs,
                                                initial_state=self._initial_state,
                                                dtype=tf.float32)
        # flatten to [batch * num_steps, hidden] for the softmax layer
        output = tf.reshape(tf.concat(outputs, 1), [-1, config.hidden_size])
        return output, final_state
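For reference, the class above assumes a config object providing the fields it reads. The original post doesn't show it, so this sketch uses placeholder values (all hypothetical):

class Config(object):
    # Placeholder hyper-parameters; the field names come from the code
    # above, but none of these values appear in the original post.
    vocab_size = 10000
    embedding_size = 200
    hidden_size = 200
    num_steps = 35
    learning_rate = 1e-3

config = Config()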
After that I have:

# x and y are assumed placeholders for the full dataset (not shown in the
# original post); the shapes are illustrative
x = tf.placeholder(tf.int32, [None, config.num_steps])
y = tf.placeholder(tf.int32, [None, config.num_steps])

dataset = tf.data.Dataset.from_tensor_slices((x, y)).shuffle(buffer_size=100).batch(batch_size)
iterator = dataset.make_initializable_iterator()  # renamed: `iter` shadows the builtin
features, labels = iterator.get_next()

model = RNNModel(input_data=[features, labels], config=config, is_training=True)

counter = 1
with tf.Session() as sess:
    sess.run(iterator.initializer, feed_dict={x: inputs[0], y: inputs[1]})
    sess.run(tf.global_variables_initializer())
    for e in range(max_epoch):
        try:
            while True:
                _, x_entropy, cur_input = sess.run(
                    [model.train_op, model.cost, model.input_x])
                print('batch:%d, batch_shape=%s, cross entropy=%f' %
                      (counter, cur_input.shape, x_entropy))
                counter += 1
        except tf.errors.OutOfRangeError:
            # dataset exhausted: reset for the next epoch
            counter = 1
            sess.run(iterator.initializer, feed_dict={x: inputs[0], y: inputs[1]})

The code above works, except that I can't then run it on a different test dataset, or predict the next word for new input, because by that point the session has ended.

So my first question is: how do I run a different set of inputs without saving the model to disk and reloading it?
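For reference, TF 1.x has a pattern for exactly this: a reinitializable iterator built with tf.data.Iterator.from_structure can be switched between two datasets that share one graph and one session. A minimal sketch (my illustration, not from the original post; the arrays are dummy data):

import numpy as np
import tensorflow as tf

# Dummy train/test arrays, stand-ins for the real data
train_x = np.zeros((100, 35), dtype=np.int32)
train_y = np.zeros((100, 35), dtype=np.int32)
test_x = np.zeros((20, 35), dtype=np.int32)
test_y = np.zeros((20, 35), dtype=np.int32)

train_ds = tf.data.Dataset.from_tensor_slices((train_x, train_y)).batch(32)
test_ds = tf.data.Dataset.from_tensor_slices((test_x, test_y)).batch(32)

# One iterator whose structure matches both datasets
iterator = tf.data.Iterator.from_structure(train_ds.output_types,
                                           train_ds.output_shapes)
features, labels = iterator.get_next()   # build the model on these, once
train_init = iterator.make_initializer(train_ds)
test_init = iterator.make_initializer(test_ds)

with tf.Session() as sess:
    sess.run(train_init)   # run training batches here ...
    sess.run(test_init)    # ... then switch to the test set, same session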

I'd like to have methods along these lines:

class RNNModel:
    def __init__(self):
        self.session = tf.Session()
        ...

    def train_batch(self, x, y):
        '''define embedding, LSTM cell, etc. here'''
        cell = tf.nn.rnn_cell.LSTMCell(hidden_size, state_is_tuple=True, reuse=???)
        tf.nn.static_rnn(cell=cell, inputs=inputs)

    def train(self, X, Y):
        for e in epochs:
            for x, y in a_batch:
                self.session.run(train_batch(self, x, y))

    def predict(self, newX):
        cell = tf.nn.rnn_cell.LSTMCell(hidden_size, state_is_tuple=True, reuse=???)
        output, state = tf.nn.static_rnn(cell=cell, inputs=inputs)
        prediction = tf.nn.softmax(output)
        self.session.run(prediction)

Finally, in my main block, I could do:

model = RNNModel(...)
model.train(X, Y)
model.predict(new_X)

Not surprisingly, this gives me errors on the line:

cell = tf.nn.rnn_cell.LSTMCell(hidden_size, state_is_tuple=True, reuse=???)

which either say that the variable doesn't exist (if I set reuse=False), or:

    Variable rnn/lstm_cell/kernel already exists, disallowed. Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope? ...
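(For context, a minimal reproduction of that second error; my illustration, not code from the post. Each call builds a fresh LSTMCell, and both calls land in the default "rnn" variable scope:)

import tensorflow as tf

def build(inputs):
    # a NEW cell per call; both calls create variables under the "rnn" scope
    cell = tf.nn.rnn_cell.LSTMCell(32, state_is_tuple=True)
    return tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)

x = tf.placeholder(tf.float32, [None, 5, 16])
build(x)       # creates rnn/lstm_cell/kernel
try:
    build(x)   # tries to create the same variable again
except ValueError as e:
    print(e)   # "Variable rnn/lstm_cell/kernel already exists, disallowed. ..."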

Since I'll be running many training loops over the network, I suppose I always need reuse=True? But then again, why does my original code work? How is this version different? What is the "right" way to handle this simple/standard scenario?
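For comparison, a minimal sketch of the pattern the error message hints at (my illustration; lm_logits and all sizes are made up): put the whole graph construction in one function under a variable_scope with reuse=tf.AUTO_REUSE, so the first call creates the weights and every later call, such as a prediction graph on new inputs, reuses them:

import tensorflow as tf

def lm_logits(inputs, hidden_size=32, vocab_size=100):
    # AUTO_REUSE: variables are created on the first call, reused afterwards
    with tf.variable_scope("lm", reuse=tf.AUTO_REUSE):
        cell = tf.nn.rnn_cell.LSTMCell(hidden_size, state_is_tuple=True)
        outputs, _ = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
        return tf.layers.dense(outputs, vocab_size, name="softmax")

train_in = tf.placeholder(tf.float32, [None, 5, 16])
new_in = tf.placeholder(tf.float32, [None, 5, 16])
train_logits = lm_logits(train_in)             # creates lm/rnn/... and lm/softmax/...
prediction = tf.nn.softmax(lm_logits(new_in))  # reuses the same weights, no error

Built this way, training and prediction are just two ops in one graph sharing one set of weights, so a single long-lived session can run both and nothing has to be saved and reloaded.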

Thanks a lot!

0 Answers