Size of the positional encoding in the TensorFlow Transformer tutorial

Date: 2019-05-10 16:45:00

Tags: tensorflow, transformer

I'm trying to understand and play around with this TensorFlow tutorial on the Transformer architecture, but there is something in the Decoder class I don't follow. Why is self.pos_encoding = positional_encoding(target_vocab_size, self.d_model) called with target_vocab_size rather than the maximum length of the sequence? See the link and the code for the class below. Any ideas? https://github.com/tensorflow/docs/blob/master/site/en/r2/tutorials/text/transformer.ipynb

class Decoder(tf.keras.layers.Layer):
  def __init__(self, num_layers, d_model, num_heads, dff, target_vocab_size, 
               rate=0.1):
    super(Decoder, self).__init__()

    self.d_model = d_model
    self.num_layers = num_layers

    self.embedding = tf.keras.layers.Embedding(target_vocab_size, d_model)
    self.pos_encoding = positional_encoding(target_vocab_size, self.d_model)

    self.dec_layers = [DecoderLayer(d_model, num_heads, dff, rate) 
                       for _ in range(num_layers)]
    self.dropout = tf.keras.layers.Dropout(rate)

  def call(self, x, enc_output, training,
           look_ahead_mask, padding_mask):

    seq_len = tf.shape(x)[1]
    attention_weights = {}

    x = self.embedding(x)  # (batch_size, target_seq_len, d_model)
    x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
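    # note: pos_encoding is sliced along its position axis here, so whatever
    # was passed as the first argument to positional_encoding acts as the
    # maximum sequence length the decoder can handle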
    x += self.pos_encoding[:, :seq_len, :]

    x = self.dropout(x, training=training)

    for i in range(self.num_layers):
      x, block1, block2 = self.dec_layers[i](x, enc_output, training,
                                             look_ahead_mask, padding_mask)

      attention_weights['decoder_layer{}_block1'.format(i+1)] = block1
      attention_weights['decoder_layer{}_block2'.format(i+1)] = block2

    # x.shape == (batch_size, target_seq_len, d_model)
    return x, attention_weights
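For reference, here is a sketch of a sinusoidal positional_encoding consistent with how the decoder uses it (the notebook's own implementation may differ in detail). The key point is that its output has shape (1, position, d_model), so the first argument is the number of positions the table covers, not anything vocabulary-related:

import numpy as np
import tensorflow as tf

def get_angles(pos, i, d_model):
  # pos: column vector of positions, i: row vector of dimension indices
  angle_rates = 1 / np.power(10000, (2 * (i // 2)) / np.float32(d_model))
  return pos * angle_rates

def positional_encoding(position, d_model):
  angle_rads = get_angles(np.arange(position)[:, np.newaxis],
                          np.arange(d_model)[np.newaxis, :],
                          d_model)
  # sine on even dimensions, cosine on odd dimensions
  angle_rads[:, 0::2] = np.sin(angle_rads[:, 0::2])
  angle_rads[:, 1::2] = np.cos(angle_rads[:, 1::2])
  # shape (1, position, d_model): one encoding vector per position
  return tf.cast(angle_rads[np.newaxis, ...], dtype=tf.float32)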

1 Answer:

Answer 0 (score: 0)

OK, I think I have convinced myself that the tutorial has a bug. Where the positional encoding is constructed, self.pos_encoding = positional_encoding(target_vocab_size, self.d_model), it should use MAX_LENGTH rather than target_vocab_size. This fixed many of the problems I was having when using a small vocabulary with long sentences. The example in the tutorial doesn't break because there target_vocab_size > MAX_LENGTH, so its setup happens to work.
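Below is a minimal sketch of the failure mode and the fix, reusing the positional_encoding sketch above. The values of target_vocab_size, MAX_LENGTH and seq_len are made up for illustration; in the notebook MAX_LENGTH would be the maximum target sequence length used when preparing the data:

import tensorflow as tf

d_model = 128
target_vocab_size = 8      # a small vocabulary
MAX_LENGTH = 40            # assumed maximum target sequence length
seq_len = 25               # a sentence longer than the vocabulary size

# as in the tutorial: the table only covers target_vocab_size positions,
# so pos_encoding[:, :seq_len, :] yields fewer than seq_len rows once
# seq_len > target_vocab_size, and the addition in call() fails
buggy = positional_encoding(target_vocab_size, d_model)

# fixed: cover every position up to the maximum sentence length
fixed = positional_encoding(MAX_LENGTH, d_model)

x = tf.zeros((1, seq_len, d_model))
x_fixed = x + fixed[:, :seq_len, :]    # works: shapes broadcast cleanly
# x_buggy = x + buggy[:, :seq_len, :]  # would raise an InvalidArgumentError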