I have been looking into how to use transfer learning in deep reinforcement learning.
I want to use a pretrained model (an .h5f file) in my project via transfer learning. I have both image inputs and scalar inputs; the image is the input to a convolutional neural network (CNN).
I have also tried loading the weights from the pretrained model and marking which layers should be trainable:
dqn.load_weights('checkpoint_reward_176.h5f')
# Freeze the first four layers and leave the rest trainable
for i in range(4):
    model.layers[i].trainable = False
for i in range(4, 8):
    model.layers[i].trainable = True
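Note that in Keras a change to a layer's trainable flag only takes effect once the model is compiled again, so the agent needs to be recompiled after these loops; a one-line sketch, reusing the compile call from the DQN code below:
dqn.compile(Adam(lr=0.00025), metrics=['mae'])  # recompile so the frozen layers are respected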
To summarize: how can I transfer trained layers into an untrained model, and is transfer learning usable in this case?
Any answers are much appreciated, thank you.
Here is the DQN code:
import gym
import numpy as np
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
from keras.models import Sequential, Model
from keras.layers import Input, Dense, Conv2D, Flatten, Reshape, concatenate
from keras.optimizers import Adam
from rl.agents.dqn import DQNAgent
from rl.memory import SequentialMemory
from rl.policy import LinearAnnealedPolicy, EpsGreedyQPolicy
from rl.processors import MultiInputProcessor

env = gym.make(args.env_name)  # `args` comes from the surrounding script's argument parser
np.random.seed(123)
env.seed(123)
nb_actions = env.action_space.n
img_shape = env.simage.shape
vel_shape = env.svelocity.shape
dst_shape = env.sdistance.shape
geo_shape = env.sgeofence.shape
AE_shape = env.sAE.shape
img_kshape = (1,) + img_shape
#Sequential model for convolutional layers applied to image
image_model = Sequential()
image_model.add(Conv2D(32, (4, 4), strides=(4, 4), activation='relu', input_shape=img_kshape, data_format="channels_first"))
image_model.add(Conv2D(64, (3, 3), strides=(2, 2), activation='relu'))
image_model.add(Flatten())
print(image_model.summary())
#Input and output of the Sequential model
image_input = Input(img_kshape)
encoded_image = image_model(image_input)
#Inputs and reshaped tensors for concatenate after with the image
velocity_input = Input((1,) + vel_shape)
distance_input = Input((1,) + dst_shape)
geofence_input = Input((1,) + geo_shape)
vel = Reshape(vel_shape)(velocity_input)
dst = Reshape(dst_shape)(distance_input)
geo = Reshape(geo_shape)(geofence_input)
AE_input = Input((1,) + AE_shape)
ae = Reshape(AE_shape)(AE_input)
# Concatenation of image, position, distance and geofence values
# 3 dense layers of 256 units
denses = concatenate([encoded_image, vel, dst, geo, ae])
denses = Dense(256, activation='relu')(denses)
denses = Dense(256, activation='relu')(denses)
denses = Dense(256, activation='relu')(denses)
#Last dense layer with nb_actions for the output
predictions = Dense(nb_actions, kernel_initializer='zeros', activation='linear')(denses)
model = Model(
inputs=[image_input, velocity_input, distance_input, geofence_input, AE_input],
outputs=predictions
)
print(model.summary())
train = True
memory = SequentialMemory(limit=100000, window_length=1)
processor = MultiInputProcessor(nb_inputs=5)
policy = LinearAnnealedPolicy(EpsGreedyQPolicy(), attr='eps', value_max=1., value_min=.1, value_test=0.0,
nb_steps=50000)
dqn = DQNAgent(model=model, processor=processor, nb_actions=nb_actions, memory=memory, nb_steps_warmup=50,
enable_double_dqn=True,
enable_dueling_network=False, dueling_type='avg',
target_model_update=1e-2, policy=policy, gamma=.99)
dqn.compile(Adam(lr=0.00025), metrics=['mae'])
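The train flag above is never used in this snippet; for completeness, a minimal sketch of how training and checkpointing typically look in keras-rl (the step count, episode count, and file name are illustrative assumptions):
if train:
    # Train the agent and persist its weights for later transfer
    dqn.fit(env, nb_steps=100000, visualize=False, verbose=2)
    dqn.save_weights('checkpoint_reward_176.h5f', overwrite=True)
else:
    # Evaluate a previously saved checkpoint
    dqn.load_weights('checkpoint_reward_176.h5f')
    dqn.test(env, nb_episodes=5, visualize=False)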
Update to the DQN code:
# Obtaining shapes from Gym environment
img_shape = env.simage.shape
vel_shape = env.svelocity.shape
dst_shape = env.sdistance.shape
geo_shape = env.sgeofence.shape
AE_shape = env.sAE.shape
# Keras-rl interprets an extra dimension at axis=0
# added on to our observations, so we need to take it into account
img_kshape = (1,) + img_shape
input_layer = Input(shape=img_kshape)
conv1 = Conv2D(32, (4, 4), strides=(4, 4), activation='relu', input_shape=img_kshape, name='conv1',
data_format="channels_first")(input_layer)
conv2 = Conv2D(64, (3, 3), strides=(2, 2), activation='relu', name='conv2')(conv1)
flat1 = Flatten(name='flat1')(conv2)
auxiliary_input1 = Input(vel_shape, name='vel')
auxiliary_input2 = Input(dst_shape, name='dst')
auxiliary_input3 = Input(geo_shape, name='geo')
auxiliary_input4 = Input(AE_shape, name='ae')
denses = concatenate([flat1, auxiliary_input1, auxiliary_input2, auxiliary_input3, auxiliary_input4])
denses = Dense(256, activation='relu')(denses)
denses = Dense(256, activation='relu')(denses)
denses = Dense(256, activation='relu')(denses)
predictions = Dense(nb_actions, kernel_initializer='zeros', activation='linear')(denses)
model = Model(inputs=[input_layer, auxiliary_input1, auxiliary_input2, auxiliary_input3, auxiliary_input4],
outputs=predictions)
print(model.summary())
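With the layers now named, a quick sanity check before any transfer is to list each layer's name and trainable flag, so that freezing by name (rather than by position index) targets the intended layers:
# Inspect layer names and trainable flags before transferring weights
for layer in model.layers:
    print(layer.name, layer.trainable)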
Answer 0 (score: 0)
I believe you should use the Keras functional API to build the neural network and connect the two parts. So, instead of the following part of your code,
#Sequential model for convolutional layers applied to image
image_model = Sequential()
image_model.add(Conv2D(32, (4, 4), strides=(4, 4), activation='relu', input_shape=img_kshape, data_format="channels_first"))
image_model.add(Conv2D(64, (3, 3), strides=(2, 2), activation='relu'))
image_model.add(Flatten())
use the following snippet, which relies on the Keras functional API:
input_layer = Input(shape=img_kshape)
conv1 = Conv2D(32, (4, 4), strides=(4, 4), activation='relu', input_shape=img_kshape, name='conv1', data_format="channels_first")(input_layer)
conv2 = Conv2D(64, (3, 3), strides=(2, 2), activation='relu', name='conv2')(conv1)
flat1 = Flatten(name='flat1')(conv2)
Then you can define an auxiliary input layer to feed in all of those vel, dst, and geo tensors (with the appropriate shape; I used 5 for convenience). Finally, concatenate the layers and build the model (so use the following snippet instead of the '#3 dense layers of 256 units' one):
auxiliary_input = Input(shape=(5,), name='aux_input')
denses1 = concatenate([flat1, auxiliary_input])
denses2 = Dense(256, activation='relu')(denses1)
denses3 = Dense(256, activation='relu')(denses2)
denses4 = Dense(256, activation='relu')(denses3)
model = Model(inputs=[input_layer,auxiliary_input], outputs=denses4)
print(model.summary())
which produces:
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_1 (InputLayer)            (None, 1, 96, 96)    0
__________________________________________________________________________________________________
conv1 (Conv2D)                  (None, 32, 24, 24)   544         input_1[0][0]
__________________________________________________________________________________________________
conv2 (Conv2D)                  (None, 15, 11, 64)   13888       conv1[0][0]
__________________________________________________________________________________________________
flat1 (Flatten)                 (None, 10560)        0           conv2[0][0]
__________________________________________________________________________________________________
aux_input (InputLayer)          (None, 5)            0
__________________________________________________________________________________________________
concatenate_1 (Concatenate)     (None, 10565)        0           flat1[0][0]
                                                                 aux_input[0][0]
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 256)          2704896     concatenate_1[0][0]
__________________________________________________________________________________________________
dense_2 (Dense)                 (None, 256)          65792       dense_1[0][0]
__________________________________________________________________________________________________
dense_3 (Dense)                 (None, 256)          65792       dense_2[0][0]
==================================================================================================
Total params: 2,850,912
Trainable params: 2,850,912
Non-trainable params: 0
__________________________________________________________________________________________________
Once trained, you can freeze some layers as in your original post and transfer weights into the non-trainable layers as shown below. First, grab the trained weights:
conv1_weights = model.get_layer('conv1').get_weights()
If conv1 is not trainable, assign the loaded weights as follows:
# conv1 in the functional code is a tensor, so fetch the layer by name before assigning weights
model.get_layer('conv1').set_weights(conv1_weights)
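Putting both steps together, a minimal sketch of the full transfer, assuming source_model (a hypothetical name, not from the post) is the pretrained network and model is the freshly built one, with identically named conv layers:
# Copy the convolutional weights layer by layer, then freeze them
for name in ('conv1', 'conv2'):
    weights = source_model.get_layer(name).get_weights()
    model.get_layer(name).set_weights(weights)
    model.get_layer(name).trainable = False
# Recompile the agent so the new trainable flags take effect
dqn.compile(Adam(lr=0.00025), metrics=['mae'])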
I worked on your question without a minimal reproducible example, so please let me know of any mistakes.