for i in range(y_word_max_len):
    # gather() selects the i-th word vector from main_decoder: (None, 9, 256) -> (None, 256)
    sub_decoder_input = gather(main_decoder, i)
    sub_decoder_input_repeated = RepeatVector(y_char_max_len)(sub_decoder_input)  # (None, 7, 256)
    sub_decoder = LSTM(256, return_sequences=True, name='sub_decoder')(sub_decoder_input_repeated)
    sub_decoder_output = TimeDistributed(Dense(58, activation='softmax'), name='sub_decoder_output')(sub_decoder)
    sub_decoder_output_reshaped = Reshape((1, y_char_max_len, 58))(sub_decoder_output)  # (None, 1, 7, 58)
    print("Sub decoder output is ", sub_decoder_output_reshaped)
I have written the code snippet above, where y_word_max_len = 9,
main_decoder is a tensor of shape (None, 9, 256),
y_char_max_len = 7,
and 58 is the size of my output. When the snippet is run, the output is:
Sub decoder output is Tensor("reshape_2/Reshape:0", shape=(?, 1, 7, 58), dtype=float32)
Sub decoder output is Tensor("reshape_3/Reshape:0", shape=(?, 1, 7, 58), dtype=float32)
Sub decoder output is Tensor("reshape_4/Reshape:0", shape=(?, 1, 7, 58), dtype=float32)
Sub decoder output is Tensor("reshape_5/Reshape:0", shape=(?, 1, 7, 58), dtype=float32)
Sub decoder output is Tensor("reshape_6/Reshape:0", shape=(?, 1, 7, 58), dtype=float32)
Sub decoder output is Tensor("reshape_7/Reshape:0", shape=(?, 1, 7, 58), dtype=float32)
Sub decoder output is Tensor("reshape_8/Reshape:0", shape=(?, 1, 7, 58), dtype=float32)
Sub decoder output is Tensor("reshape_9/Reshape:0", shape=(?, 1, 7, 58), dtype=float32)
Sub decoder output is Tensor("reshape_10/Reshape:0", shape=(?, 1, 7, 58), dtype=float32)
Now I want to concatenate all 9 tensors obtained this way into a single composite tensor
of shape (?, 9, 7, 58).
How can I achieve this in Keras? Thanks.
Answer 0 (score: 0):
Add a Concatenate layer:
joined = Concatenate(axis=1)([sub1, sub2, sub3, sub4, sub5....])
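Each sub-tensor has shape (?, 1, 7, 58), so concatenating nine of them along axis=1 yields the desired (?, 9, 7, 58).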
Rather than listing the tensors by hand, the best way is to collect the sub-tensors in a list and append to it inside your loop:
subTensors = []
for ..... :
    # calculations
    subTensors.append(sub_decoder_output_reshaped)

joined = Concatenate(axis=1)(subTensors)
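For completeness, here is a minimal self-contained sketch of the whole idea. It assumes your gather() is simply a slice of time step i (implemented here with a Lambda layer), stands in a plain Input for main_decoder so the snippet runs on its own, and gives each LSTM/TimeDistributed layer a unique name so Keras does not complain about duplicate layer names when the model is built:

from keras.layers import (Input, LSTM, Dense, RepeatVector, TimeDistributed,
                          Reshape, Lambda, Concatenate)
from keras.models import Model

y_word_max_len = 9   # number of words
y_char_max_len = 7   # characters per word
n_chars = 58         # output vocabulary size

main_decoder = Input(shape=(y_word_max_len, 256))   # stand-in for your main_decoder tensor

sub_outputs = []
for i in range(y_word_max_len):
    # select the i-th word vector: (None, 9, 256) -> (None, 256)
    step = Lambda(lambda x, idx=i: x[:, idx, :])(main_decoder)
    repeated = RepeatVector(y_char_max_len)(step)                       # (None, 7, 256)
    decoded = LSTM(256, return_sequences=True,
                   name='sub_decoder_%d' % i)(repeated)                 # (None, 7, 256)
    chars = TimeDistributed(Dense(n_chars, activation='softmax'),
                            name='sub_decoder_output_%d' % i)(decoded)  # (None, 7, 58)
    sub_outputs.append(Reshape((1, y_char_max_len, n_chars))(chars))    # (None, 1, 7, 58)

# stack the nine (None, 1, 7, 58) tensors along axis 1 -> (None, 9, 7, 58)
joined = Concatenate(axis=1)(sub_outputs)

model = Model(main_decoder, joined)
model.summary()   # final output shape should be (None, 9, 7, 58)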