Module 'tensorflow.contrib.seq2seq' has no attribute 'simple_decoder_fn_train'

Date: 2019-04-04 05:53:31

Tags: python tensorflow machine-learning deep-learning chatbot

This error message appears when using TensorFlow 1.13.1. Any thoughts on what causes it?
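For reference, a quick check along these lines (a minimal diagnostic sketch, assuming a standard TensorFlow 1.x install) prints the installed version and whether the old attribute still exists:

import tensorflow as tf

# Show the installed TensorFlow version, e.g. 1.13.1.
print(tf.__version__)

# The r1.0-era decoder functions were removed in later releases,
# so this prints False on 1.13.1.
print(hasattr(tf.contrib.seq2seq, 'simple_decoder_fn_train'))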

Error message:

AttributeError                            Traceback (most recent call last)
<ipython-input-40-32a0c216e33b> in <module>
     12         tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int),
     13         len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers,
---> 14         target_vocab_to_int, attn_length)
     15 
     16     # Create a tensor to be used for making predictions.

<ipython-input-38-ae61a93c0a57> in seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, vocab_to_int, attn_length)
     13     train_logits, infer_logits = decoding_layer(dec_embed_input, dec_embeddings, enc_state, target_vocab_size+1, 
     14                                                 sequence_length, rnn_size, num_layers, vocab_to_int, keep_prob,
---> 15                                                 attn_length)
     16 
     17     return train_logits, infer_logits

<ipython-input-37-aea2a940da68> in decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, vocab_to_int, keep_prob, attn_length)
     19 
     20         train_logits = decoding_layer_train(
---> 21             encoder_state[0], dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob)
     22         decoding_scope.reuse_variables()
     23         infer_logits = decoding_layer_infer(encoder_state[0], dec_cell, dec_embeddings, vocab_to_int['<GO>'],

<ipython-input-35-7f5fedb3a13f> in decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob)
      2                          output_fn, keep_prob):
      3     '''Decode the training data'''
----> 4     train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
      5     train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
      6         dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)

AttributeError: module 'tensorflow.contrib.seq2seq' has no attribute 'simple_decoder_fn_train'

Code:

def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
                         output_fn, keep_prob):
    '''Decode the training data'''
    train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
    train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
        dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)
    train_pred_drop = tf.nn.dropout(train_pred, keep_prob)
    return output_fn(train_pred_drop)

train_graph = tf.Graph()
with train_graph.as_default():

    # Load the model inputs
    input_data, targets, lr, keep_prob = model_inputs()
    # Sequence length will be the max line length for each batch
    sequence_length = tf.placeholder_with_default(max_line_length, None, name='sequence_length')
    input_shape = tf.shape(input_data)

    # Create the logits from the model
    train_logits, inference_logits = seq2seq_model(
        tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), 
        len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, 
        target_vocab_to_int, attn_length)

    # Create a tensor to be used for making predictions.
    tf.identity(inference_logits, 'logits')
    with tf.name_scope("optimization"):
        # Loss function
        cost = tf.contrib.seq2seq.sequence_loss(
            train_logits,
            targets,
            tf.ones([input_shape[0], sequence_length]))

        # Optimizer
        optimizer = tf.train.AdamOptimizer(learning_rate)

        # Gradient Clipping
        gradients = optimizer.compute_gradients(cost)
        capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
        train_op = optimizer.apply_gradients(capped_gradients)

1 Answer:

Answer 0 (score: 0)

I am guessing that the version of TensorFlow you are using is the problem. As stated in this GitHub ticket,

    This implementation uses API r1.0.1

Since your TensorFlow version is different, it raises this error.
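For what it's worth, tf.contrib.seq2seq in TensorFlow 1.13 exposes the newer helper/decoder API in place of simple_decoder_fn_train and dynamic_rnn_decoder. A rough port of decoding_layer_train might look like the following (an untested sketch under that assumption; the tf.fill call that broadcasts the scalar sequence_length to a per-batch length vector is a guess at the intended behavior):

def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length,
                         decoding_scope, output_fn, keep_prob):
    '''Decode the training data with the TF 1.13 helper/decoder API.'''
    # TrainingHelper feeds the ground-truth embedding at each time step.
    # sequence_length is a scalar in the original graph, so broadcast it
    # to one length per batch element (an assumption about the data).
    lengths = tf.fill([tf.shape(dec_embed_input)[0]], sequence_length)
    helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, lengths)
    decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, encoder_state)
    outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(
        decoder, impute_finished=True, scope=decoding_scope)
    # Same dropout and output projection as the original function.
    train_pred_drop = tf.nn.dropout(outputs.rnn_output, keep_prob)
    return output_fn(train_pred_drop)

Alternatively, pinning TensorFlow to the r1.0.x line that the tutorial targets avoids the port entirely.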