Error when setting batch_size to 1 in sequence_to_sequence_implementation.ipynb (with batch_size > 1 it works fine)

Asked: 2017-08-29 00:05:08

Tags: python tensorflow encoder-decoder

Reference: https://github.com/udacity/deep-learning/blob/master/seq2seq/sequence_to_sequence_implementation.ipynb

Setup: encoding_embedding_size = decoding_embedding_size = 200

Error message:

Traceback (most recent call last):
  File "my_encoder_decoder.py", line 369, in <module>
    main(sys.argv[1])
  File "my_encoder_decoder.py", line 346, in main
    my_qa.train()
  File "my_encoder_decoder.py", line 298, in train
    model.build_graph()
  File "D:\vmware\share\test\chat\seq2seqmodel.py", line 197, in build_graph
    self._decoding_layer()
  File "D:\vmware\share\test\chat\seq2seqmodel.py", line 130, in _decoding_layer
    inference_decoder_output = tf.contrib.seq2seq.dynamic_decode(inference_decoder, impute_finished=False, maximum_iterations=self.max_target_sequence_length)[0]
  File "d:\Program Files\Anaconda3\lib\site-packages\tensorflow\contrib\seq2seq\python\ops\decoder.py", line 286, in dynamic_decode
    swap_memory=swap_memory)
  File "d:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 2770, in while_loop
    result = context.BuildLoop(cond, body, loop_vars, shape_invariants)
  File "d:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 2599, in BuildLoop
    pred, body, original_loop_vars, loop_vars, shape_invariants)
  File "d:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 2580, in _BuildLoop
    _EnforceShapeInvariant(m_var, n_var)
  File "d:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 575, in _EnforceShapeInvariant
    % (merge_var.name, m_shape, n_shape))
ValueError: The shape for decode_1/decoder/while/Merge_5:0 is not an invariant for the loop. It enters the loop with shape (1, 200), but has shape (?, 200) after one iteration. Provide shape invariants using the shape_invariants argument of tf.while_loop or set_shape() on the loop variables.
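For reference, the failing call is the tf.contrib.seq2seq.dynamic_decode at seq2seqmodel.py line 130 in the traceback. A minimal sketch of that kind of inference decoding layer, modeled on the referenced notebook and assuming TF 1.x with tf.contrib.seq2seq, looks roughly like the following; all names here (vocab_size, rnn_size, go_id, eos_id, dec_cell, encoder_state, and so on) are illustrative stand-ins, not the actual seq2seqmodel.py code:

    import tensorflow as tf
    from tensorflow.python.layers.core import Dense

    # Illustrative constants, not the real model's values, except that the
    # width of 200 matches the embedding sizes in the setup above.
    vocab_size = 100
    rnn_size = 200
    decoding_embedding_size = 200
    batch_size = 1                      # the problematic setting
    max_target_sequence_length = 20
    go_id, eos_id = 1, 2

    dec_embeddings = tf.Variable(
        tf.random_uniform([vocab_size, decoding_embedding_size]))
    dec_cell = tf.contrib.rnn.LSTMCell(rnn_size)
    output_layer = Dense(vocab_size)

    # Stand-in for the encoder's final state; its batch dimension comes from
    # the Python int batch_size, i.e. a static 1.
    encoder_state = dec_cell.zero_state(batch_size, tf.float32)

    # start_tokens is tiled with the Python int batch_size, so it also
    # carries a static batch dimension of 1.
    start_tokens = tf.tile(tf.constant([go_id], dtype=tf.int32),
                           [batch_size], name='start_tokens')

    inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
        dec_embeddings, start_tokens, eos_id)
    inference_decoder = tf.contrib.seq2seq.BasicDecoder(
        dec_cell, inference_helper, encoder_state, output_layer)

    # The call corresponding to seq2seqmodel.py line 130 in the traceback;
    # with batch_size == 1 this is where the ValueError is raised.
    inference_decoder_output = tf.contrib.seq2seq.dynamic_decode(
        inference_decoder,
        impute_finished=False,
        maximum_iterations=max_target_sequence_length)[0]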

Does anyone know how to fix it?

0 Answers:

No answers yet.