Does TensorFlow allow LSTM deconvolution (the way ConvLSTM2D does for 2D convolution)?

Date: 2018-05-05 05:37:48

Tags: tensorflow keras lstm convolution

I am trying to extend a network. For the convolutional part I use ConvLSTM2D from Keras. Is there an equivalent for deconvolution (i.e., something like an LSTMDeconv2D)?

2 answers:

Answer 0 (score: 0)

Conv3D, see this example used to predict the next frame.
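
For reference, a minimal sketch of that kind of model, assuming stacked ConvLSTM2D layers followed by a Conv3D output layer; the 40x40 single-channel frame size and filter counts are illustrative choices, not taken from the question:

from keras.models import Sequential
from keras.layers import ConvLSTM2D, BatchNormalization, Conv3D

# Stacked ConvLSTM2D layers, ending in a Conv3D that maps the sequence of
# feature maps back to a sequence of single-channel frames.
seq = Sequential()
seq.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                   input_shape=(None, 40, 40, 1),
                   padding='same', return_sequences=True))
seq.add(BatchNormalization())
seq.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                   padding='same', return_sequences=True))
seq.add(BatchNormalization())
seq.add(Conv3D(filters=1, kernel_size=(3, 3, 3),
               activation='sigmoid', padding='same'))
seq.compile(loss='binary_crossentropy', optimizer='adadelta')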

Answer 1 (score: 0)

It should be possible to combine any model with the TimeDistributed wrapper. So you can build a deconv model and apply it to the LSTM's output (a sequence of vectors) via the TimeDistributed wrapper.

An example: first create a deconv network using Conv2DTranspose layers.

from keras.models import Model
from keras.layers import LSTM, Conv2DTranspose, Input, Activation, Dense, Reshape, TimeDistributed

# Hyperparameters
layer_filters = [32, 64]
lstm_dim = 64  # size of the LSTM output vectors (must match the LSTM units below)

# Deconv model
# (adapted from https://github.com/keras-team/keras/blob/master/examples/mnist_denoising_autoencoder.py )

deconv_inputs = Input(shape=(lstm_dim,), name='deconv_input')
feature_map_shape = (None, 50, 50, 64)  # deconvolve from [batch_size, 50, 50, 64] => [batch_size, 200, 200, 3]
x = Dense(feature_map_shape[1] * feature_map_shape[2] * feature_map_shape[3])(deconv_inputs)
x = Reshape((feature_map_shape[1], feature_map_shape[2], feature_map_shape[3]))(x)
for filters in layer_filters[::-1]:
    x = Conv2DTranspose(filters=filters, kernel_size=3, strides=2, activation='relu', padding='same')(x)
x = Conv2DTranspose(filters=3, kernel_size=3, padding='same')(x)  # last layer has 3 channels
deconv_output = Activation('sigmoid', name='deconv_output')(x)
deconv_model = Model(deconv_inputs, deconv_output, name='deconv_network')
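
As a quick sanity check (my addition, not part of the original answer), the deconv model should map an lstm_dim-sized vector to a 200x200x3 image:

import numpy as np

dummy_batch = np.random.rand(2, lstm_dim)        # two random latent vectors
print(deconv_model.predict(dummy_batch).shape)   # expected: (2, 200, 200, 3)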

Then you can apply this deconv model to the output of the LSTM with the TimeDistributed wrapper.

# LSTM
lstm_input = Input(shape=(None, 16), name='lstm_input')  # => [batch_size, timesteps, input_dim]
lstm_outputs = LSTM(units=lstm_dim, return_sequences=True)(lstm_input)  # => [batch_size, timesteps, lstm_dim]
predicted_images = TimeDistributed(deconv_model)(lstm_outputs)  # => [batch_size, timesteps, 200, 200, 3]

model = Model(lstm_input, predicted_images, name='lstm_deconv')
model.summary()
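
Illustrative only (not from the original answer): compiling the combined model and fitting it on random data just to confirm the shapes line up; real training would use your own vector sequences and target frames.

import numpy as np

model.compile(optimizer='adam', loss='mse')

# Random stand-in data: sequences of 16-dim vectors in, sequences of 200x200 RGB frames out.
batch_size, timesteps = 4, 10
x = np.random.rand(batch_size, timesteps, 16)
y = np.random.rand(batch_size, timesteps, 200, 200, 3)
model.fit(x, y, batch_size=2, epochs=1)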