TensorFlow embeddings don't exist after the first RNN example

Asked: 2016-12-20 07:20:23

Tags: python, tensorflow

I set up a print statement, and I noticed that the embeddings exist when the RNN is fed the first batch, but after the second batch they do not, and I get the following error:

    ValueError: Variable RNNLM/RNNLM/Embedding/Adam_2/ does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?

Here is the code where I generate my embeddings:

def add_embedding(self):
  with tf.device('/gpu:0'):
    embedding = tf.get_variable("Embedding", [len(self.vocab), self.config.embed_size])
    e_x = tf.nn.embedding_lookup(embedding, self.input_placeholder)
    inputs = [tf.squeeze(s, [1]) for s in tf.split(1, self.config.num_steps, e_x)]
    return inputs
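(A side note for readers on newer releases, not part of the original question: the argument order of tf.split changed in TensorFlow 1.0, so the pre-1.0 call tf.split(1, n, x) above, which splits x into n pieces along axis 1, would become roughly the following.)

# Rough TF 1.0+ equivalent of the lookup-and-split above (untested sketch):
e_x = tf.nn.embedding_lookup(embedding, self.input_placeholder)
inputs = [tf.squeeze(s, [1]) for s in tf.split(e_x, self.config.num_steps, axis=1)]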

Here is how the model is set up; this is where I suspect the problem lies:

def model(self, inputs):
  with tf.variable_scope("input_drop"):
    inputs_drop = [tf.nn.dropout(i, self.dropout_placeholder) for i in inputs]

  with tf.variable_scope("RNN") as scope:
    self.initial_state = tf.zeros([self.config.batch_size, self.config.hidden_size], tf.float32)
    state = self.initial_state
    states = []
    for t, e in enumerate(inputs_drop):
      print "t is {0}".format(t)
      if t > 0:
        scope.reuse_variables()
      H = tf.get_variable("Hidden", [self.config.hidden_size, self.config.hidden_size])
      I = tf.get_variable("I", [self.config.embed_size, self.config.hidden_size])
      b_1 = tf.get_variable("b_1", (self.config.hidden_size,))

      state = tf.sigmoid(tf.matmul(state, H) + tf.matmul(e, I) + b_1)
      states.append(state)

  with tf.variable_scope("output_dropout"):
    rnn_outputs = [tf.nn.dropout(o, self.dropout_placeholder) for o in states]
  return rnn_outputs

The problem appears once I get to my loss function and training op, defined below:

def add_training_op(self, loss):
  opt = tf.train.AdamOptimizer(self.config.lr)
  train_op = opt.minimize(loss)
  return train_op

EDIT: Here is some updated code to help everyone out:

def __init__(self, config):
  self.config = config
  self.load_data(debug=False)
  self.add_placeholders()
  self.inputs = self.add_embedding()
  self.rnn_outputs = self.add_model(self.inputs)
  self.outputs = self.add_projection(self.rnn_outputs)
  self.predictions = [tf.nn.softmax(tf.cast(o, 'float64')) for o in self.outputs]
  output = tf.reshape(tf.concat(1, self.outputs), [-1, len(self.vocab)])
  self.calculate_loss = self.add_loss_op(output)
  self.train_step = self.add_training_op(self.calculate_loss)

The remaining methods here pertain to add_projection and calculate_loss, so we can rule those out.

def add_loss_op(self, output):
  weights = tf.ones([self.config.batch_size * self.config.num_steps], tf.int32)
  seq_loss = tf.python.seq2seq.sequence_loss(
      [output],
      tf.reshape(self.labels_placeholder, [-1]),
      weights
  )
  tf.add_to_collection('total_loss', seq_loss)
  loss = tf.add_n(tf.get_collection('total_loss'))
  return loss

def add_projection(self, rnn_outputs):
  with tf.variable_scope("Projection", initializer=tf.contrib.layers.xavier_initializer()) as scope:
    U = tf.get_variable("U", [self.config.hidden_size, len(self.vocab)])
    b_2 = tf.get_variable("b_2", [len(self.vocab)])

    outputs = [tf.matmul(x, U) + b_2 for x in rnn_outputs]
    return outputs


def train_RNNLM():
  config = Config()
  gen_config = deepcopy(config)
  gen_config.batch_size = gen_config.num_steps = 1

  with tf.variable_scope('RNNLM') as scope:
    model = RNNLM_Model(config)
    # This instructs gen_model to reuse the same variables as the model above
    scope.reuse_variables()
    gen_model = RNNLM_Model(gen_config)

  init = tf.initialize_all_variables()
  saver = tf.train.Saver()

  with tf.Session() as session:
    best_val_pp = float('inf')
    best_val_epoch = 0

    session.run(init)
    for epoch in xrange(config.max_epochs):
      print 'Epoch {}'.format(epoch)
      start = time.time()
      ###
      train_pp = model.run_epoch(
          session, model.encoded_train,
          train_op=model.train_step)
      valid_pp = model.run_epoch(session, model.encoded_valid)
      print 'Training perplexity: {}'.format(train_pp)
      print 'Validation perplexity: {}'.format(valid_pp)
      if valid_pp < best_val_pp:
        best_val_pp = valid_pp
        best_val_epoch = epoch
        saver.save(session, './ptb_rnnlm.weights')
      if epoch - best_val_epoch > config.early_stopping:
        break
      print 'Total time: {}'.format(time.time() - start)

2 Answers:

Answer 0 (score: 0)

It looks like the code is trying to create a new Adam variable on every batch. Is add_training_op perhaps being called twice? Also, the snippet for def add_training_op looks incomplete, since there is no return statement.
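One way to test that hypothesis (a sketch using the old tf.all_variables() API from this era; it is tf.global_variables() in later releases) is to list the optimizer state right after the graph is built and look for duplicated slots:

# If add_training_op really runs twice, duplicated Adam slots such as
# Embedding/Adam and Embedding/Adam_2 would show up in this listing.
for v in tf.all_variables():
  if 'Adam' in v.name:
    print v.name, v.get_shape()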

Answer 1 (score: 0)

The problem turned out to be the following lines of code:

model = RNNLM_Model(config)
# This instructs gen_model to reuse the same variables as the model above
scope.reuse_variables()
gen_model = RNNLM_Model(gen_config)

It turned out that building the second model under reuse_variables() was the problem; by removing that line, the error went away.
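If both models are still needed, one possible workaround (a sketch, not from the original answer; the is_training flag here is hypothetical and would have to be threaded through RNNLM_Model so that add_training_op is skipped) is to make sure Adam's slot variables are only ever created for the first copy:

with tf.variable_scope('RNNLM') as scope:
  model = RNNLM_Model(config)  # builds all variables, including Adam's slots
  scope.reuse_variables()
  # Hypothetical flag: build the reused copy without its own optimizer, so
  # no new Adam slot variables are requested while the scope is reusing.
  gen_model = RNNLM_Model(gen_config, is_training=False)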