I am working on a racing game that uses reinforcement learning. While implementing the neural network to train the model, I ran into a problem. I found several examples that use a CNN, but it seems that adding an extra LSTM layer would improve the model's performance. I found the following example:
https://team.inria.fr/rits/files/2018/02/ICRA18_EndToEndDriving_CameraReady.pdf
The network I need to implement
The problem is that I am not sure how to implement the LSTM layer here, i.e. how to feed the input below into an LSTM layer.
This is the code I am currently using. I want to add an LSTM layer after the Conv2D layers.
# Imports assumed for this snippet (Keras 2.x):
# from keras.models import Model
# from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dropout, Dense
# from keras.initializers import random_normal
# from keras.optimizers import Adam

self.__nb_actions = 28
self.__gamma = 0.99

# Define the model
activation = 'relu'
pic_input = Input(shape=(59, 255, 3))

# Convolutional feature extractor
img_stack = Conv2D(16, (3, 3), name='convolution0', padding='same', activation=activation, trainable=train_conv_layers)(pic_input)
img_stack = MaxPooling2D(pool_size=(2, 2))(img_stack)
img_stack = Conv2D(32, (3, 3), activation=activation, padding='same', name='convolution1', trainable=train_conv_layers)(img_stack)
img_stack = MaxPooling2D(pool_size=(2, 2))(img_stack)
img_stack = Conv2D(32, (3, 3), activation=activation, padding='same', name='convolution2', trainable=train_conv_layers)(img_stack)
img_stack = MaxPooling2D(pool_size=(2, 2))(img_stack)

# Fully connected head producing one Q-value per action
img_stack = Flatten()(img_stack)
img_stack = Dropout(0.2)(img_stack)
img_stack = Dense(128, name='rl_dense', kernel_initializer=random_normal(stddev=0.01))(img_stack)
img_stack = Dropout(0.2)(img_stack)
output = Dense(self.__nb_actions, name='rl_output', kernel_initializer=random_normal(stddev=0.01))(img_stack)

opt = Adam()
self.__action_model = Model(inputs=[pic_input], outputs=output)
self.__action_model.compile(optimizer=opt, loss='mean_squared_error')
self.__action_model.summary()
Thanks.
Answer 0 (score: 0)
There are several ways to do this. One is to reshape the output of the conv layers and feed it into an LSTM layer. Various approaches are shown in Shaping data for LSTM, and feeding output of dense layers to LSTM.
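As an illustration of that idea (a minimal sketch, not the answerer's exact code), one common layout is to treat each training sample as a short sequence of frames, run the existing conv stack on every frame with TimeDistributed, and pass the resulting per-frame feature vectors to an LSTM before the Q-value output. The sequence length of 4 frames and the LSTM size of 128 are assumptions; the input and output shapes follow the question's code (59x255x3 images, 28 actions), and Keras 2.x is assumed.

# Minimal sketch (assumptions: Keras 2.x, 4-frame sequences, illustrative layer sizes)
from keras.models import Model
from keras.layers import (Input, Conv2D, MaxPooling2D, Flatten, Dropout,
                          Dense, LSTM, TimeDistributed)
from keras.initializers import random_normal
from keras.optimizers import Adam

seq_len = 4          # assumed number of consecutive frames per sample
nb_actions = 28      # same action count as in the question

frames_input = Input(shape=(seq_len, 59, 255, 3))

# Apply the same conv stack to every frame in the sequence
x = TimeDistributed(Conv2D(16, (3, 3), padding='same', activation='relu'))(frames_input)
x = TimeDistributed(MaxPooling2D(pool_size=(2, 2)))(x)
x = TimeDistributed(Conv2D(32, (3, 3), padding='same', activation='relu'))(x)
x = TimeDistributed(MaxPooling2D(pool_size=(2, 2)))(x)
x = TimeDistributed(Conv2D(32, (3, 3), padding='same', activation='relu'))(x)
x = TimeDistributed(MaxPooling2D(pool_size=(2, 2)))(x)
x = TimeDistributed(Flatten())(x)      # -> (batch, seq_len, features)

# The LSTM reads the per-frame feature vectors as a time series
x = LSTM(128)(x)
x = Dropout(0.2)(x)
x = Dense(128, kernel_initializer=random_normal(stddev=0.01))(x)
x = Dropout(0.2)(x)
output = Dense(nb_actions, kernel_initializer=random_normal(stddev=0.01))(x)

model = Model(inputs=[frames_input], outputs=output)
model.compile(optimizer=Adam(), loss='mean_squared_error')
model.summary()

With this layout, the replay memory has to return stacks of consecutive frames of shape (seq_len, 59, 255, 3) instead of single frames, since the LSTM needs a temporal dimension in each sample.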