from keras.models import Sequential
from keras.layers import (Conv2D, Activation, MaxPooling2D, Dropout, Flatten,
                          Dense, LSTM, TimeDistributed, GlobalAveragePooling1D)

model = Sequential()
model.add(TimeDistributed(Conv2D(32, (3, 3), padding='same'),
                          input_shape=(100, 6, 5, 1)))
model.add(TimeDistributed(Activation('relu')))
model.add(TimeDistributed(Conv2D(32, (3, 3))))
model.add(TimeDistributed(Activation('relu')))
model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2))))
model.add(TimeDistributed(Dropout(0.25)))
model.add(TimeDistributed(Flatten()))
model.add(TimeDistributed(Dense(512)))
model.add(TimeDistributed(Dense(35, name="first_dense_flow")))
model.add(LSTM(20, return_sequences=True, name="lstm_layer_flow"))
model.add(TimeDistributed(Dense(101), name="time_distr_dense_one_flow"))
model.add(GlobalAveragePooling1D(name="global_avg_flow"))
model.compile(loss='mae', optimizer='adam', metrics=['accuracy'])
model.fit(train_input, train_output, epochs=50, batch_size=60)
I get:

ValueError: Error when checking input: expected time_distributed_38_input to have 5 dimensions, but got array with shape (13974, 100, 6, 5)

I need to predict (1, 6, 5) from (100, 6, 5), where 100 is the number of timesteps. Please suggest any changes the model needs.
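For reference, the mismatch is between the 5D input the first layer declares and the 4D array being fed. A minimal sketch of that mismatch, using a hypothetical zero array as a stand-in for train_input, showing how a trailing channel axis would satisfy the declared shape:

import numpy as np

# Stand-in for train_input, matching the shape reported in the error.
train_input = np.zeros((13974, 100, 6, 5))

# TimeDistributed(Conv2D(...), input_shape=(100, 6, 5, 1)) expects 5D batches:
# (batch, 100, 6, 5, 1). Adding a trailing channel axis satisfies that:
train_input_5d = train_input[..., np.newaxis]
print(train_input_5d.shape)  # (13974, 100, 6, 5, 1)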
Answer (score: 0):

See the rewritten code snippet:
from keras.layers import Reshape

model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same', input_shape=(100, 6, 5)))
model.add(Activation('relu'))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(TimeDistributed(Dense(512)))
model.add(TimeDistributed(Dense(35, name="first_dense_flow")))
# The output above is 4D, shaped (batch, 49, 2, 35); the LSTM needs 3D
# input, so collapse the last two axes first (see the last note below):
model.add(Reshape((49, 2 * 35)))
model.add(LSTM(20, return_sequences=True, name="lstm_layer_flow"))
model.add(TimeDistributed(Dense(101), name="time_distr_dense_one_flow"))
model.add(GlobalAveragePooling1D(name="global_avg_flow"))
A few notes on the changes:

- Don't put batch_size into input_shape; input_shape is batch_shape without the batch dimension, so here you would have batch_shape = (60, 100, 6, 5) (since batch_size=60).
- Your data is (101, 6, 5), but your input_shape is (100, 6, 5); that may not always work. Either set input_shape=(101, 6, 5), or slice the data (e.g. x[:, :100, :, :]) before feeding it to the model.
- Conv2D does not need TimeDistributed, since you are feeding in 3D input (minus the batch dimension).
- Activation, MaxPooling2D, and Dropout scale automatically to the dimensionality of their input, so wrapping them in TimeDistributed is redundant.
- Flatten would destroy the timesteps-channels relationship, so it may not be a sound addition; if you still want to use it, you will need to drop the two TimeDistributed(Dense) layers after Flatten and add a Reshape before feeding the LSTM. (A sketch of the shape handling follows below.)
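To make the input_shape / batch_shape point and the slicing suggestion concrete, here is a minimal, self-contained sketch with hypothetical dummy data (the array sizes and the small LSTM model are illustrative, not from the original post):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, LSTM

# input_shape excludes the batch dimension: batches of 60 sequences,
# each with 100 timesteps of 70 features, have batch_shape (60, 100, 70),
# but the layer is declared with input_shape=(100, 70) only.
model = Sequential()
model.add(LSTM(20, return_sequences=True, input_shape=(100, 70)))
model.add(Dense(1))
model.compile(loss='mae', optimizer='adam')
model.summary()  # confirm per-layer output shapes

# Hypothetical data with one extra timestep (101 instead of 100):
x = np.random.rand(120, 101, 70)
y = np.random.rand(120, 100, 1)  # illustrative targets

# Slice off the extra timestep so the data matches input_shape:
x = x[:, :100, :]
model.fit(x, y, epochs=1, batch_size=60)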