Assign requires shapes of both tensors to match. lhs shape= [1024] rhs shape= [1200]

Time: 2017-12-10 21:51:15

Tags: tensorflow tensorflow-gpu

I'm new to TensorFlow and am trying to build my own NMT model based on the tutorial at https://github.com/tensorflow/nmt/.
I get an error when restoring the trained model for inference:

Assign requires shapes of both tensors to match. lhs shape= [1024] rhs shape= [1200]

Here is the code where I think this happens:

def _build_decoder_(self, encoder_outputs, encoder_state):
    tgt_sos_id = tf.cast(self.output_vocab_table.lookup(tf.constant('<SOS>')), tf.int32)
    tgt_eos_id = tf.cast(self.output_vocab_table.lookup(tf.constant('<EOS>')), tf.int32)
    with tf.variable_scope('decoder', reuse=self.reuse):
        batch_size = tf.size(self.batched_input.source_lengths)
        decoder_cell = tf.nn.rnn_cell.BasicLSTMCell(self.num_units)
        source_lengths = self.batched_input.source_lengths
        attention_mechanism = tf.contrib.seq2seq.LuongAttention(self.num_units, encoder_outputs,
                                                                memory_sequence_length=source_lengths)
        decoder_cell = tf.contrib.seq2seq.AttentionWrapper(decoder_cell, attention_mechanism,
                                                           attention_layer_size=self.num_units / 2,
                                                           alignment_history=(
                                                               self.mode == tf.contrib.learn.ModeKeys.INFER))
        initial_state = decoder_cell.zero_state(dtype=tf.float32, batch_size=batch_size)
        if self.mode != tf.contrib.learn.ModeKeys.INFER:
            target = self.batched_input.target
            target_lengths = self.batched_input.target_lengths
            embed_input = tf.nn.embedding_lookup(self.dec_embeddings, target)
            helper = tf.contrib.seq2seq.TrainingHelper(embed_input, target_lengths)
            decoder = tf.contrib.seq2seq.BasicDecoder(decoder_cell, helper=helper,
                                                      initial_state=initial_state, )
            max_decoder_length = tf.reduce_max(target_lengths)
            decoder_outputs, final_context_state, _ = tf.contrib.seq2seq.dynamic_decode(decoder=decoder,
                                                                                        impute_finished=True, )
            sample_id = decoder_outputs.sample_id
            # logits = decoder_outputs.rnn_output
            logits = self.projection_layer(decoder_outputs.rnn_output)
        else:
            start_tokens = tf.fill([batch_size], tgt_sos_id)
            end_token = tgt_eos_id
            helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
                self.dec_embeddings, start_tokens, end_token)
            decoder = tf.contrib.seq2seq.BasicDecoder(decoder_cell, helper,
                                                      initial_state=initial_state,
                                                      output_layer=self.projection_layer)
            max_encoder_length = tf.reduce_max(self.batched_input.source_lengths)
            maximum_iterations = tf.to_int32(
                tf.round(tf.to_float(max_encoder_length) * self.decoding_length_factor))
            decoder_outputs, final_context_state, _ = tf.contrib.seq2seq.dynamic_decode(decoder=decoder,
                                                                                        impute_finished=True,
                                                                                        maximum_iterations=maximum_iterations)
            logits = decoder_outputs.rnn_output
            sample_id = decoder_outputs.sample_id
    return logits, sample_id, final_context_state

I think the problem is that the batch_size gets saved along with the model, but I can't come up with any other way to work around it.

I am using this code for saving and restoring:

self.saver = tf.train.Saver(tf.global_variables(), max_to_keep=5)
infer_model.saver.restore(infer_sess, latest)

I also tried setting reshape=True, but I still couldn't restore the model.
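(Not from the original post.) One way to diagnose a restore-time shape mismatch like this is to compare the variable shapes stored in the checkpoint against the shapes of the freshly built inference graph; in TF 1.x, `tf.train.NewCheckpointReader(path).get_variable_to_shape_map()` returns the checkpoint side. A minimal sketch of the comparison step, with a made-up variable name and the shapes from the error message:

```python
# Hypothetical shape maps: in practice, ckpt_shapes would come from
# tf.train.NewCheckpointReader(ckpt_path).get_variable_to_shape_map(),
# and graph_shapes from the shapes of tf.global_variables() in the new graph.
ckpt_shapes = {"decoder/attention_wrapper/basic_lstm_cell/bias": [1024]}
graph_shapes = {"decoder/attention_wrapper/basic_lstm_cell/bias": [1200]}

def find_mismatches(ckpt, graph):
    # Report every variable present in both maps whose shapes disagree;
    # these are exactly the variables Saver.restore() would choke on.
    return [(name, ckpt[name], graph[name])
            for name in ckpt if name in graph and ckpt[name] != graph[name]]

for name, saved, built in find_mismatches(ckpt_shapes, graph_shapes):
    print(name, "checkpoint:", saved, "graph:", built)
```

Note that `tf.train.Saver(reshape=True)` only helps when the total number of elements matches (it reshapes, it does not resize), so it cannot bridge 1024 vs. 1200 entries.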

1 answer:

Answer 0 (score: 0)

I've fixed it now. It was just a human error: I had swapped two variables (batch_size and num_units) in another method.
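That kind of swap explains the numbers in the error. BasicLSTMCell concatenates the biases of its four gates, so its bias variable has 4 * num_units entries; if the training graph and the inference graph end up with different unit counts, the saved and expected bias shapes disagree. A sketch of the arithmetic (the concrete values 256 and 300 are an assumption chosen to reproduce 1024 vs. 1200, not taken from the post):

```python
def lstm_bias_shape(num_units):
    # BasicLSTMCell stacks the input/forget/cell/output gate biases,
    # so the bias variable is a single vector of length 4 * num_units.
    return [4 * num_units]

# Hypothetical: training graph built with 256 units, inference graph with 300
# (e.g. because batch_size and num_units were swapped in one code path).
print(lstm_bias_shape(256))  # [1024] -- shape stored in the checkpoint
print(lstm_bias_shape(300))  # [1200] -- shape the inference graph expects
```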