How to parallelize a custom LSTM (4D input)

Date: 2019-04-19 08:46:30

Tags: tensorflow keras lstm summarize

After the Permute layer, the dimensions become (None, None, 12, 16). I want to summarize the last two dimensions with an LSTM (48 units) with input_shape=(12, 16), so that the overall shape becomes (None, None, 48).

Currently I have a workaround with a custom LSTM & LSTMCell, but it is very slow, since I use another LSTM inside the cell.

What I would want is something like this:

    (None, None, 12, 16)   # output of the Permute layer
    (None, None, 48)       # after the summarizing LSTM(48)
    (None, None, 60)       # after the main LSTM(60)

The last two steps are currently done inside the custom LSTM. Is there a way to separate them?

What is the proper way of doing this? Can we create different (or multiple) LSTMs for cells that share the same weights but keep separate cell states? Could you give me some directions?
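For illustration, the separation being asked about could be sketched with standard Keras layers, assuming the summarizing LSTM does not need to carry state across outer timesteps (a hedged sketch with hypothetical names, not the author's code):

    from keras.layers import Input, TimeDistributed, LSTM

    # (batch, outer_time, 12, 16) -- outer_time stays dynamic
    inner = Input(shape=(None, 12, 16))
    # summarize each (12, 16) slice into 48 features: (batch, outer_time, 48)
    summarized = TimeDistributed(LSTM(48))(inner)
    # main recurrence over the outer time axis: (batch, outer_time, 60)
    outputs = LSTM(60, return_sequences=True)(summarized)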

The model summary:

    Layer (type)                    Output Shape          Param #   Connected to
    ============================================================================
    inputs (InputLayer)             (None, 36, None, 1)   0
    convlayer (Conv2D)              (None, 36, None, 16)  160       inputs[0][0]
    mp (MaxPooling2D)               (None, 12, None, 16)  0         convlayer[0][0]
    permute_1 (Permute)             (None, None, 12, 16)  0         mp[0][0]
    reshape_1 (Reshape)             (None, None, 192)     0         permute_1[0][0]
    custom_lstm_extended_1 (CustomL (None, None, 60)      26160     reshape_1[0][0]
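For reference, the front-end of this model can be reconstructed in plain Keras roughly as follows; the kernel and pool sizes are inferred from the parameter counts and output shapes above, so treat them as assumptions:

    from keras.layers import Input, Conv2D, MaxPooling2D, Permute, Reshape

    inputs = Input(shape=(36, None, 1), name='inputs')
    # 16 filters, 3x3 kernel, 1 input channel: 16*(3*3*1 + 1) = 160 params
    x = Conv2D(16, (3, 3), padding='same', name='convlayer')(inputs)
    x = MaxPooling2D(pool_size=(3, 1), name='mp')(x)   # (None, 12, None, 16)
    x = Permute((2, 1, 3), name='permute_1')(x)        # (None, None, 12, 16)
    inner = Reshape((-1, 192), name='reshape_1')(x)    # (None, None, 192)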

The custom LSTM is called like this:

    CustomLSTMExtended(units=60, summarizeUnits=48, return_sequences=True, return_state=False, input_shape=(None, 192))(inner)

The shapes go like this:

    (None, None, 12, 16)
    (None, None, 48)
    (None, None, 60)
LSTM class:
        self.summarizeUnits = summarizeUnits
        self.summarizeLSTM = CuDNNLSTM(summarizeUnits, input_shape=(None, 16), return_sequences=False, return_state=True)

        cell = SummarizeLSTMCellExtended(self.summarizeLSTM, units,
                activation=activation,
                recurrent_activation=recurrent_activation,
                use_bias=use_bias,
                kernel_initializer=kernel_initializer,
                recurrent_initializer=recurrent_initializer,
                unit_forget_bias=unit_forget_bias,
                bias_initializer=bias_initializer,
                kernel_regularizer=kernel_regularizer,
                recurrent_regularizer=recurrent_regularizer,
                bias_regularizer=bias_regularizer,
                kernel_constraint=kernel_constraint,
                recurrent_constraint=recurrent_constraint,
                bias_constraint=bias_constraint,
                dropout=dropout,
                recurrent_dropout=recurrent_dropout,
                implementation=implementation)


        RNN.__init__(self, cell,
                                   return_sequences=return_sequences,
                                   return_state=return_state,
                                   go_backwards=go_backwards,
                                   stateful=stateful,
                                   unroll=unroll,
                                   **kwargs)
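For readers unfamiliar with this workaround, here is a hypothetical sketch of what such a cell's call could look like (this is not the author's actual code). It shows why the approach is slow: the inner LSTM graph is re-executed inside the recurrence loop at every outer timestep, so it cannot be batched across time:

    from keras import backend as K
    from keras.layers import LSTMCell

    class SummarizeLSTMCellExtended(LSTMCell):
        # Hypothetical: an LSTM cell that first summarizes each (12, 16)
        # input slice with an inner LSTM, then steps the outer cell.
        def __init__(self, summarizeLSTM, units, **kwargs):
            super(SummarizeLSTMCellExtended, self).__init__(units, **kwargs)
            self.summarizeLSTM = summarizeLSTM

        def call(self, inputs, states, training=None):
            # inputs: (batch, 192) -> restore the (12, 16) slice
            sliced = K.reshape(inputs, (-1, 12, 16))
            # the inner LSTM runs once per outer timestep -- the bottleneck
            summarized, h, c = self.summarizeLSTM(sliced)
            # step the outer LSTM on the 48-feature summary
            return super(SummarizeLSTMCellExtended, self).call(
                summarized, states, training=training)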

1 Answer:

Answer 0 (score: 0)

I managed to do this by using tf.reshape rather than the Keras Reshape layer. The Keras Reshape layer does not want you to interfere with the "batch_size" dimension.

# capture the dynamic shape (batch, time, ...) of the tensor as a graph op
shape = Lambda(lambda x: tf.shape(x), output_shape=(4,))(inner)
..
..
# reshape back to (batch, time, 48), reusing the captured dynamic dimensions
inner = Lambda(lambda x: customreshape(x), output_shape=(None, 48))([inner, shape])
..
def customreshape(inputs):
    inner = inputs[0]
    shape = inputs[1]
    import tensorflow as tf2
    # keep the dynamic batch and time dims, fix the feature axis to 48
    reshaped = tf2.reshape(inner, [shape[0], shape[1], 48])
    return reshaped
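For context, here is a self-contained sketch of how this trick lets the summarizing LSTM run outside the custom cell. The steps the answer elides with ".." are my assumption: merge the batch and time axes, run the summarizing LSTM once over all slices, then restore the dynamic dimensions. All names here are hypothetical:

    import tensorflow as tf
    from keras.layers import Input, Lambda, CuDNNLSTM, LSTM
    from keras.models import Model

    inp = Input(shape=(None, 12, 16))                      # (batch, time, 12, 16)
    shape = Lambda(lambda x: tf.shape(x), output_shape=(4,))(inp)

    # merge batch and time so one LSTM pass covers every (12, 16) slice
    flat = Lambda(lambda x: tf.reshape(x, [-1, 12, 16]))(inp)
    summarized = CuDNNLSTM(48)(flat)                       # (batch*time, 48)

    # restore the dynamic batch and time dimensions: (batch, time, 48)
    unflat = Lambda(lambda x: tf.reshape(x[0], [x[1][0], x[1][1], 48]),
                    output_shape=(None, 48))([summarized, shape])

    out = LSTM(60, return_sequences=True)(unflat)          # (batch, time, 60)
    model = Model(inp, out)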