I am trying to use Keras (2.0.6) with the TensorFlow backend (1.2.1) to mask missing data going into a Convolutional LSTM layer:
import keras
from keras.models import Sequential
from keras.layers import Masking, ConvLSTM2D
n_timesteps = 10
n_width = 64
n_height = 64
n_channels = 1
model = Sequential()
model.add(Masking(mask_value=0., input_shape=(n_timesteps, n_width, n_height, n_channels)))
model.add(ConvLSTM2D(filters=64, kernel_size=(3, 3)))
But I get the following ValueError:
ValueError: Shape must be rank 4 but is rank 2 for 'conv_lst_m2d_1/while/Tile' (op: 'Tile') with input shapes: [?,64,64,1], [2].
How can I use masking with the ConvLSTM2D layer?
Answer 0 (score: 0)
Keras seems to have trouble when the Masking layer is applied to arbitrary tensors such as images. I was able to find a (far from ideal) workaround that involves flattening the input before applying the masking layer. So consider the original model:
model = Sequential()
model.add(Masking(mask_value=0., input_shape=(n_timesteps, n_width, n_height, n_channels)))
model.add(ConvLSTM2D(filters=64, kernel_size=(3, 3)))
We can modify it as follows:
from keras.layers import TimeDistributed, Flatten, Reshape
input_shape = (n_timesteps, n_width, n_height, n_channels)
model = Sequential()
model.add(TimeDistributed(Flatten(), input_shape=input_shape))
model.add(TimeDistributed(Masking(mask_value=0.)))
model.add(TimeDistributed(Reshape(input_shape[1:])))
model.add(ConvLSTM2D(filters=64, kernel_size=(3, 3)))
This solution adds extra computation to the graph, but it adds no additional parameters.
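Below is a minimal sketch (not part of the original answer) of how the modified model could be built and sanity-checked; the batch size, the dummy NumPy data, and the choice to zero out the last three timesteps are assumptions made purely for illustration.

import numpy as np
from keras.models import Sequential
from keras.layers import Masking, ConvLSTM2D, TimeDistributed, Flatten, Reshape

n_timesteps, n_width, n_height, n_channels = 10, 64, 64, 1
input_shape = (n_timesteps, n_width, n_height, n_channels)

model = Sequential()
model.add(TimeDistributed(Flatten(), input_shape=input_shape))
model.add(TimeDistributed(Masking(mask_value=0.)))
model.add(TimeDistributed(Reshape(input_shape[1:])))
model.add(ConvLSTM2D(filters=64, kernel_size=(3, 3)))
model.compile(optimizer='adam', loss='mse')

# Flatten, Masking and Reshape carry no trainable weights, so the parameter
# count reported here should match the plain ConvLSTM2D model.
model.summary()

# Illustrative input: timesteps whose values all equal mask_value (0.) are
# the ones the Masking layer marks as skippable.
x = np.random.rand(4, n_timesteps, n_width, n_height, n_channels)
x[:, -3:] = 0.  # zero-pad the last three timesteps
print(model.predict(x).shape)  # expected (4, 62, 62, 64) with the default 'valid' padding and channels_last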