TensorFlow: attention output is concatenated with the next decoder input, causing a dimension mismatch in a seq2seq model

Date: 2018-07-07 08:57:04

Tags: python tensorflow nlp rnn seq2seq

[TF 1.8] I am trying to build a seq2seq model for a toy chatbot to learn TensorFlow and deep learning. I was able to train and run the model with sampled softmax and beam search, but then I tried to apply tf.contrib.seq2seq.LuongAttention with tf.contrib.seq2seq.AttentionWrapper and got the following error while building the graph:

ValueError: Dimensions must be equal, but are 384 and 256 for 'rnn/while/rnn/multi_rnn_cell/cell_0/basic_lstm_cell/MatMul_2' (op: 'MatMul') with input shapes: [64,384], [256,512].

Here is my model:

class ChatBotModel:

    def __init__(self, inferring=False, batch_size=1, use_sample_sofmax=True):
        """inferring: if set, we do not construct the backward pass in the model.
        """
        print('Initialize new model')
        self.inferring = inferring
        self.batch_size = batch_size
        self.use_sample_sofmax = use_sample_sofmax

    def build_graph(self):
        # INPUTS
        self.X = tf.placeholder(tf.int32, [None, None])
        self.Y = tf.placeholder(tf.int32, [None, None])
        self.X_seq_len = tf.placeholder(tf.int32, [None])
        self.Y_seq_len = tf.placeholder(tf.int32, [None])


        self.gl_step = tf.Variable(
                      0, dtype=tf.int32, trainable=False, name='global_step')

        single_cell = tf.nn.rnn_cell.BasicLSTMCell(128)
        keep_prob = tf.cond(tf.convert_to_tensor(self.inferring, tf.bool), lambda: tf.constant(
            1.0), lambda: tf.constant(0.8))
        single_cell = tf.contrib.rnn.DropoutWrapper(
            single_cell, output_keep_prob=keep_prob)
        encoder_cell = tf.contrib.rnn.MultiRNNCell([single_cell for _ in range(2)])

        # ENCODER         
        encoder_out, encoder_state = tf.nn.dynamic_rnn(
            cell = encoder_cell, 
            inputs = tf.contrib.layers.embed_sequence(self.X, 10000, 128),
            sequence_length = self.X_seq_len,
            dtype = tf.float32)
        # encoder_state is ((cell0_c, cell0_h), (cell1_c, cell1_h))

        # DECODER INPUTS
        after_slice = tf.strided_slice(self.Y, [0, 0], [self.batch_size, -1], [1, 1])
        decoder_inputs = tf.concat( [tf.fill([self.batch_size, 1], 2), after_slice], 1)

        # ATTENTION
        attention_mechanism = tf.contrib.seq2seq.LuongAttention(
            num_units = 128, 
            memory = encoder_out,
            memory_sequence_length = self.X_seq_len)

        # DECODER COMPONENTS
        Y_vocab_size = 10000
        decoder_cell = tf.contrib.rnn.MultiRNNCell([single_cell for _ in range(2)])
        decoder_cell = tf.contrib.seq2seq.AttentionWrapper(
            cell = decoder_cell,
            attention_mechanism = attention_mechanism,
            attention_layer_size=128)
        decoder_embedding = tf.Variable(tf.random_uniform([Y_vocab_size, 128], -1.0, 1.0))
        projection_layer = CustomDense(Y_vocab_size)
        if self.use_sample_sofmax:
            softmax_weight = projection_layer.kernel
            softmax_biases = projection_layer.bias

        if not self.inferring:
            # TRAINING DECODER
            training_helper = tf.contrib.seq2seq.TrainingHelper(
                inputs = tf.nn.embedding_lookup(decoder_embedding, decoder_inputs),
                sequence_length = self.Y_seq_len,
                time_major = False)

            decoder_initial_state = decoder_cell.zero_state(self.batch_size, dtype=tf.float32).clone(
                cell_state=encoder_state)

            training_decoder = tf.contrib.seq2seq.BasicDecoder(
                cell = decoder_cell,
                helper = training_helper,
                initial_state = decoder_initial_state,
                output_layer = projection_layer
            )
            training_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(
                decoder = training_decoder,
                impute_finished = True,
                maximum_iterations = tf.reduce_max(self.Y_seq_len))
            training_logits = training_decoder_output.rnn_output

            # LOSS
            softmax_loss_function = None
            if self.use_sample_sofmax:
                def sampled_loss(labels, logits):
                    labels = tf.reshape(labels, [-1, 1])
                    return tf.nn.sampled_softmax_loss(weights=softmax_weight,
                                                      biases=softmax_biases,
                                                      labels=labels,
                                                      inputs=logits,
                                                      num_sampled=64,
                                                      num_classes=10000)
                softmax_loss_function = sampled_loss

            masks = tf.sequence_mask(self.Y_seq_len, tf.reduce_max(self.Y_seq_len), dtype=tf.float32)
            self.loss = tf.contrib.seq2seq.sequence_loss(logits = training_logits, targets = self.Y, weights = masks, softmax_loss_function=softmax_loss_function)

            # BACKWARD
            params = tf.trainable_variables()
            gradients = tf.gradients(self.loss, params)
            clipped_gradients, _ = tf.clip_by_global_norm(gradients, 5.0)
            self.train_op = tf.train.AdamOptimizer().apply_gradients(zip(clipped_gradients, params), global_step=self.gl_step)
        else:
            encoder_states = []
            for i in range(2):
                if isinstance(encoder_state[i],tf.contrib.rnn.LSTMStateTuple):
                    encoder_state_c = tf.contrib.seq2seq.tile_batch(encoder_state[i].c, multiplier=2)
                    encoder_state_h = tf.contrib.seq2seq.tile_batch(encoder_state[i].h, multiplier=2)
                    encoder_state = tf.contrib.rnn.LSTMStateTuple(c=encoder_state_c, h=encoder_state_h)
                encoder_states.append(encoder_state)
            encoder_states = tuple(encoder_states)

            predicting_decoder = tf.contrib.seq2seq.BeamSearchDecoder(
                cell = decoder_cell,
                embedding = decoder_embedding,
                start_tokens = tf.tile(tf.constant([2], dtype=tf.int32), [self.batch_size]),
                end_token = 3,
                initial_state = decoder_initial_state,
                beam_width = 2,
                output_layer = projection_layer,
                length_penalty_weight = 0.0)
            predicting_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(
                decoder = predicting_decoder,
                impute_finished = False,
                maximum_iterations = 4 * tf.reduce_max(self.Y_seq_len))
            self.predicting_logits = predicting_decoder_output.predicted_ids

Tracing back a few lines in the log, I found that the error occurs here:

/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/rnn_cell_impl.py in call(self, inputs, state)
    636 
    637     gate_inputs = math_ops.matmul(
--> 638         array_ops.concat([inputs, h], 1), self._kernel)
    639     gate_inputs = nn_ops.bias_add(gate_inputs, self._bias)

I have checked the 'h' tensor of the LSTM cell and it has shape [batch_size, 128], so my guess is that the attention output of the previous decoding step gets concatenated with the current decoder input, making 'inputs' have shape [batch_size, 256]; that is then concatenated with the 'h' tensor into a [batch_size, 384] tensor, which causes this error.
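To make the shapes concrete, here is a back-of-the-envelope sketch using the sizes from the model above (embedding size, num_units and attention_layer_size are all 128; the failing run used batch_size = 64):

# rough dimension bookkeeping for the first decoder LSTM layer
batch_size = 64
embedding_size = 128        # decoder_embedding is [Y_vocab_size, 128]
attention_layer_size = 128  # AttentionWrapper(..., attention_layer_size=128)
num_units = 128             # BasicLSTMCell(128)

# AttentionWrapper concatenates the previous attention with the cell input,
cell_input_depth = embedding_size + attention_layer_size   # 256
# and BasicLSTMCell concatenates that input with h before the matmul:
concat_depth = cell_input_depth + num_units                # 384 -> [64, 384]

kernel_needed = (concat_depth, 4 * num_units)                       # (384, 512)
kernel_built_for_encoder = (num_units + num_units, 4 * num_units)   # (256, 512)
print(kernel_needed, kernel_built_for_encoder)  # the mismatch in the error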

My question is: isn't the attention output supposed to be concatenated with the next decoder input, or am I misunderstanding something? And how do I fix this error?

1 Answer:

Answer 0 (score: 0)

You have probably found the answer already, but for anyone else who stumbles on this error (like me): focus on the second shape. It is given as [256, 512]. Now open 'rnn_cell_impl.py' and go to the line that performs the concat operation. You will notice that the kernel shape reported there is out of sync with the decoder input (which has num_units + attention_layer_size as its dimension 1, with dimension 0 being your batch_size).
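If you want to confirm this on your own graph, a quick debugging sketch (not part of the model) is to list the LSTM kernels after building the graph:

# after build_graph(): print every LSTM kernel and its static shape;
# the cell that was reused from the encoder shows up as [256, 512]
for v in tf.global_variables():
    if 'basic_lstm_cell/kernel' in v.name:
        print(v.name, v.shape)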

Basically, you are using in the decoder the same cells you created for the encoder (it's a 2-layer LSTM with 128 units, right?), so the kernel size shows up as 256, 512. To fix this, create new cells for the decoder between these two lines:

Y_vocab_size = 10000
## create fresh base rnn cells for the decoder
## (one per layer, so no weights are shared with the encoder or across layers)
decode_op_cells = [tf.nn.rnn_cell.BasicLSTMCell(128) for _ in range(2)]
decoder_cell = tf.contrib.rnn.MultiRNNCell(decode_op_cells)

Now if you visualize the code at the same line where the error occurred, you will see [64,384] and [384,512] (a legal matmul operation, which should resolve your error). And of course, whatever dropout you were adding, feel free to add it to those decode_op_cells too.
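For instance, a minimal sketch of that, reusing the keep_prob tensor from the question's encoder code (make_decode_op_cell is just an illustrative helper name):

## sketch: one fresh, dropout-wrapped cell per decoder layer,
## mirroring the encoder's DropoutWrapper setup
def make_decode_op_cell():
    cell = tf.nn.rnn_cell.BasicLSTMCell(128)
    return tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)

decoder_cell = tf.contrib.rnn.MultiRNNCell([make_decode_op_cell() for _ in range(2)])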