Generating sequences with CNTK by sampling at each generation step

Time: 2017-08-14 09:08:17

Tags: python reinforcement-learning decoder cntk sequence-to-sequence

In a seq2seq model with an encoder and a decoder, the softmax layer outputs a distribution over the entire vocabulary at each generation step. In CNTK, a greedy decoder is easy to implement with the C.hardmax function. It looks like this.

def create_model_greedy(s2smodel):
    # model used in (greedy) decoding (history is decoder's own output)
    @C.Function
    @C.layers.Signature(InputSequence[C.layers.Tensor[input_vocab_dim]])
    def model_greedy(input): # (input*) --> (word_sequence*)
        # Decoding is an unfold() operation starting from sentence_start.
        # We must transform s2smodel (history*, input* -> word_logp*) into a generator (history* -> output*)
        # which holds 'input' in its closure.
        unfold = C.layers.UnfoldFrom(lambda history: s2smodel(history, input) >> C.hardmax,
                                     # stop once sentence_end_index was max-scoring output
                                     until_predicate=lambda w: w[...,sentence_end_index],
                                     length_increase=length_increase)
        return unfold(initial_state=sentence_start, dynamic_axes_like=input)
    return model_greedy

However, I don't want to always output the highest-probability token at each step. Instead, I want a stochastic decoder that generates a token according to the probability distribution over the vocabulary.

How can I do that? Any help is appreciated. Thanks.

2 answers:

Answer 0 (score: 3)

You can add noise to the output before the hardmax. In particular, you can use C.random.gumbel or C.random.gumbel_like to sample proportionally to exp(output). This is known as the Gumbel-max trick. The cntk.random module also contains other distributions, but if you have log probabilities you most likely want to add Gumbel noise before the hardmax. Some code:

@C.Function
def randomized_hardmax(x):
    noisy_x = x + C.random.gumbel_like(x)
    return C.hardmax(noisy_x)

Then replace your hardmax with randomized_hardmax.
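For intuition, the Gumbel-max trick can be checked outside CNTK with plain NumPy (a sketch, not part of the original answer): taking the argmax of logits plus independent Gumbel noise draws exact samples from the softmax distribution over those logits.

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.array([1.0, 2.0, 3.0])

# Softmax probabilities, for reference
probs = np.exp(logits) / np.exp(logits).sum()

# Gumbel-max trick: argmax(logits + Gumbel noise) ~ Categorical(softmax(logits))
n = 200_000
gumbel = rng.gumbel(size=(n, logits.size))
samples = np.argmax(logits + gumbel, axis=1)

# Empirical frequencies should closely match the softmax probabilities
empirical = np.bincount(samples, minlength=logits.size) / n
```

This is exactly what the randomized_hardmax above does inside the CNTK graph: gumbel_like supplies the noise, and hardmax plays the role of argmax.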

Answer 1 (score: 0)

Many thanks to Nikos Karampatziakis.

If you want a stochastic sampling decoder that generates sequences of the same length as the target sequence, the following code works.

@C.Function
def sampling(x):
    noisy_x = x + C.random.gumbel_like(x)
    return C.hardmax(noisy_x)

def create_model_sampling(s2smodel):
    @C.Function
    @C.layers.Signature(input=InputSequence[C.layers.Tensor[input_vocab_dim]],
                        labels=LabelSequence[C.layers.Tensor[label_vocab_dim]])
    def model_sampling(input, labels): # (input*, labels*) --> (word_sequence*)
        unfold = C.layers.UnfoldFrom(lambda history: s2smodel(history, input) >> sampling,
                                     length_increase=1)
        return unfold(initial_state=sentence_start, dynamic_axes_like=labels)
    return model_sampling
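As an aside (not from the original answers): the same trick also gives temperature-controlled sampling if the logits are scaled before the Gumbel noise is added. A minimal NumPy sketch, assuming the inputs are log probabilities or unnormalized logits:

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    # Lower temperature -> closer to greedy (hardmax); higher -> closer to uniform
    scaled = np.asarray(logits) / temperature
    gumbel = rng.gumbel(size=scaled.shape)
    return int(np.argmax(scaled + gumbel))

rng = np.random.default_rng(1)
logits = np.array([0.5, 1.0, 4.0])

# At low temperature the sampler almost always picks the top-scoring index (2);
# at high temperature the choices spread out toward uniform.
cold = [sample_with_temperature(logits, 0.1, rng) for _ in range(1000)]
hot = [sample_with_temperature(logits, 10.0, rng) for _ in range(1000)]
```

In the CNTK decoders above, the equivalent change would be dividing the model output by a temperature constant before adding the gumbel_like noise.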