tf.train.import_meta_graph('model.meta') fails to load a seq2seq model with attention?

Posted: 2017-02-27 20:00:27

Tags: python tensorflow sequence

Environment: Ubuntu 16.04; TensorFlow v1.0.0 (CPU)

When I try to import a saved graph with tf.train.import_meta_graph('model.meta'), I get the following error:

Traceback (most recent call last):
  File "test_load.py", line 19, in <module>
    new_saver = tf.train.import_meta_graph('model.meta')
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1577, in import_meta_graph
    **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/meta_graph.py", line 498, in import_scoped_meta_graph
    producer_op_list=producer_op_list)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/importer.py", line 259, in import_graph_def
    raise ValueError('No op named %s in defined operations.' % node.op)
ValueError: No op named attn_add_fun_f32f32f32 in defined operations.

When I retrain the model without attention and import the graph with the same line of code, this error is not thrown. Is it currently not possible to load a trained model that uses attention? Here is my attention implementation:

attention_states = tf.transpose(self.encoder_outputs, [1, 0, 2])

(attention_keys,
 attention_values,
 attention_score_fn,
 attention_construct_fn) = seq2seq.prepare_attention(
            attention_states = attention_states,
            attention_option = "bahdanau",
            num_units        = self.decoder_cell.output_size)

decoder_fn_train = seq2seq.attention_decoder_fn_train(
            encoder_state          = self.encoder_state,
            attention_keys         = attention_keys,
            attention_values       = attention_values,
            attention_score_fn     = attention_score_fn,
            attention_construct_fn = attention_construct_fn,
            name                   = 'attention_decoder')

decoder_fn_inference = seq2seq.attention_decoder_fn_inference(
            output_fn              = output_fn,
            encoder_state          = self.encoder_state,
            attention_keys         = attention_keys,
            attention_values       = attention_values,
            attention_score_fn     = attention_score_fn,
            attention_construct_fn = attention_construct_fn,
            embeddings             = self.embedding_matrix,
            start_of_sequence_id   = self.EOS,
            end_of_sequence_id     = self.EOS,
            maximum_length         = tf.reduce_max(self.encoder_inputs_length) + 3,
            num_decoder_symbols    = self.vocab_size)
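Editor's note, not from the original post: the op name in the error, attn_add_fun_f32f32f32, appears to come from a function that tf.contrib.seq2seq's attention code defines with tf.Defun, and import_meta_graph in this TensorFlow version cannot recreate such ops unless the defining Python code has already run in the process. One common workaround is to skip import_meta_graph entirely: re-run the same model-construction code to rebuild the graph, then restore only the variable values. A minimal sketch of that pattern, where build_model is a hypothetical callable wrapping the construction code shown above:

```python
def load_without_import_meta_graph(build_model, checkpoint_path):
    """Rebuild the graph in Python, then restore only the variables.

    build_model is a hypothetical callable containing the same graph
    construction code as in the question (prepare_attention,
    attention_decoder_fn_train, ...), so any Defun-generated ops such as
    attn_add_fun_f32f32f32 are registered again before restoring.
    """
    import tensorflow as tf  # TF 1.x API

    tf.reset_default_graph()
    build_model()                         # recreate the graph structure
    saver = tf.train.Saver()              # matches variables by name
    sess = tf.Session()
    saver.restore(sess, checkpoint_path)  # load weights from the checkpoint
    return sess
```

With this approach the .meta file is not needed at load time; saver.restore reads only the checkpoint's variable files (model.index and model.data-*).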

Thanks!

0 Answers:

No answers yet.