I'm new to TensorFlow and have been working through a simple chatbot-building project from this link.
Many warnings said that certain things would be deprecated in TensorFlow 2.0 and that I should upgrade, so I did. I then used the automatic TensorFlow code upgrader to update all the necessary files to 2.0. It reported a few errors.
When processing the model.py file, it returned the following warnings:
133:20: WARNING: tf.nn.sampled_softmax_loss requires manual check. `partition_strategy` has been removed from tf.nn.sampled_softmax_loss. The 'div' strategy will be used by default.
148:31: WARNING: Using member tf.contrib.rnn.DropoutWrapper in deprecated module tf.contrib.rnn. (Manual edit required) tf.contrib.rnn.* has been deprecated, and widely used cells/functions will be moved to tensorflow/addons repository. Please check it there and file Github issues if necessary.
148:31: ERROR: Using member tf.contrib.rnn.DropoutWrapper in deprecated module tf.contrib. tf.contrib.rnn.DropoutWrapper cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
171:33: ERROR: Using member tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq in deprecated module tf.contrib. tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
197:27: ERROR: Using member tf.contrib.legacy_seq2seq.sequence_loss in deprecated module tf.contrib. tf.contrib.legacy_seq2seq.sequence_loss cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
The main problem I'm having is with the code that uses the contrib module, which no longer exists. How can I adapt the following three code blocks so that they work in TensorFlow 2.0?
# Define the network
# Here we use an embedding model: it takes integers as input and converts them into word vectors for
# better word representation
decoderOutputs, states = tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq(
    self.encoderInputs,  # List<[batch=?, inputDim=1]>, list of size args.maxLength
    self.decoderInputs,  # For training, we force the correct output (feed_previous=False)
    encoDecoCell,
    self.textData.getVocabularySize(),
    self.textData.getVocabularySize(),  # Both encoder and decoder have the same number of classes
    embedding_size=self.args.embeddingSize,  # Dimension of each word
    output_projection=outputProjection.getWeights() if outputProjection else None,
    feed_previous=bool(self.args.test)  # When we test (self.args.test), we use previous output as next input (feed_previous)
)
# Finally, we define the loss function
self.lossFct = tf.contrib.legacy_seq2seq.sequence_loss(
    decoderOutputs,
    self.decoderTargets,
    self.decoderWeights,
    self.textData.getVocabularySize(),
    softmax_loss_function=sampledSoftmax if outputProjection else None  # If None, use default SoftMax
)
encoDecoCell = tf.contrib.rnn.DropoutWrapper(
    encoDecoCell,
    input_keep_prob=1.0,
    output_keep_prob=self.args.dropout
)
Answer 0 (score: 1)
tf.contrib is essentially a collection of contributions from the TensorFlow community, and its fate has been handled as follows.
In TensorFlow 2, contrib has been removed. Every project that lived in contrib had one of three options for its future: move into core TensorFlow, move to a separate repository, or be removed.
You can check this link for the full list of projects and which category each one falls into.
As for alternative solutions: porting code from TensorFlow 1 to TensorFlow 2 does not happen automatically; you have to make the changes by hand.
You can use the following replacements instead.
tf.contrib.rnn.DropoutWrapper can be changed to tf.compat.v1.nn.rnn_cell.DropoutWrapper.
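Applied to the DropoutWrapper block from your question, a minimal sketch of that substitution could look like the following; the cell size and keep probability here are placeholders standing in for your own encoDecoCell and self.args.dropout.

import tensorflow as tf

# A basic LSTM cell wrapped with dropout, reached through the compat.v1 namespace.
# 256 units and dropout_keep_prob are placeholders for your own cell and self.args.dropout.
dropout_keep_prob = 0.9
encoDecoCell = tf.compat.v1.nn.rnn_cell.BasicLSTMCell(256)
encoDecoCell = tf.compat.v1.nn.rnn_cell.DropoutWrapper(
    encoDecoCell,
    input_keep_prob=1.0,
    output_keep_prob=dropout_keep_prob
)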
For sequence-to-sequence models, you can use TensorFlow Addons.
The TensorFlow Addons project includes many sequence-to-sequence tools that let you easily build production-ready encoder-decoders.
For example, you can use something like the following.
import numpy as np
import tensorflow as tf
import tensorflow_addons as tfa
from tensorflow import keras

vocab_size = 10000   # placeholder: size of your vocabulary
embed_size = 128     # placeholder: embedding dimension

# Integer token IDs for the encoder and decoder, plus the true decoder sequence lengths
encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)
decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)
sequence_lengths = keras.layers.Input(shape=[], dtype=np.int32)

# Shared embedding layer for encoder and decoder tokens
embeddings = keras.layers.Embedding(vocab_size, embed_size)
encoder_embeddings = embeddings(encoder_inputs)
decoder_embeddings = embeddings(decoder_inputs)

# Encoder LSTM; its final state initializes the decoder
encoder = keras.layers.LSTM(512, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_embeddings)
encoder_state = [state_h, state_c]

# TrainingSampler feeds the ground-truth token at each decoding step
sampler = tfa.seq2seq.sampler.TrainingSampler()
decoder_cell = keras.layers.LSTMCell(512)
output_layer = keras.layers.Dense(vocab_size)
decoder = tfa.seq2seq.basic_decoder.BasicDecoder(decoder_cell, sampler,
                                                 output_layer=output_layer)
final_outputs, final_state, final_sequence_lengths = decoder(
    decoder_embeddings, initial_state=encoder_state,
    sequence_length=sequence_lengths)
Y_proba = tf.nn.softmax(final_outputs.rnn_output)

model = keras.Model(inputs=[encoder_inputs, decoder_inputs, sequence_lengths],
                    outputs=[Y_proba])
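For the loss in the second block of your question, TensorFlow Addons also provides tfa.seq2seq.sequence_loss, which takes logits, integer targets, and per-timestep weights. A rough sketch with made-up shapes, just to show the call:

import tensorflow as tf
import tensorflow_addons as tfa

# Illustrative shapes only: batch of 4, 10 decoder steps, vocabulary of 1000.
batch_size, seq_len, vocab_size = 4, 10, 1000

logits = tf.random.normal([batch_size, seq_len, vocab_size])   # e.g. final_outputs.rnn_output
targets = tf.random.uniform([batch_size, seq_len], maxval=vocab_size, dtype=tf.int32)
weights = tf.ones([batch_size, seq_len])                        # set padding positions to 0.0

loss = tfa.seq2seq.sequence_loss(logits, targets, weights)

By default the result is averaged across both timesteps and batch, which plays a similar role to tf.contrib.legacy_seq2seq.sequence_loss in the original code.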
You need to change every method that relies on tf.contrib to a compatible alternative.
I hope this answers your question.