Keras model appears untrained after loading weights

Time: 2019-01-13 10:56:29

Tags: keras

I am trying to save and restore the weights of a given model in Keras. I save the weights successfully with model.save_weights(filepath, ...), and the weights do actually get loaded. I confirmed this by writing the output of model.get_weights() to a file both after saving and after restoring, and comparing the resulting files.

But my model performs just as badly as it did at the start. Am I missing something?
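One way to confirm the save/load round-trip numerically, rather than by comparing text dumps, is to compare the weight arrays directly. This is a minimal sketch using a toy stand-in model (the architecture and file name here are illustrative, not the ones from the question):

```python
import numpy as np
from tensorflow import keras

def build_model():
    # Tiny stand-in network; the real model from the question would work the same way.
    inp = keras.Input(shape=(3,))
    x = keras.layers.Dense(4, activation='relu')(inp)
    out = keras.layers.Dense(2)(x)
    return keras.Model(inp, out)

model = build_model()
before = [w.copy() for w in model.get_weights()]
model.save_weights('demo.weights.h5')

# A fresh model starts with different random weights; restoring overwrites them.
model2 = build_model()
model2.load_weights('demo.weights.h5')

# Every weight tensor should match the saved ones exactly.
assert all(np.array_equal(a, b) for a, b in zip(before, model2.get_weights()))
```

If this assertion passes, the weights themselves are fine, and poor behavior after a restart has to come from somewhere else (e.g. the policy or the evaluation code).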

def __init__(self, **args):
    # Next, we build our model. We use the same model that was described by Mnih et al. (2015).
    self.model.add(Convolution2D(32, (3, 3), strides=(1, 1)))
    self.model.add(Activation('relu'))
    self.model.add(Convolution2D(64, (3, 3), strides=(1, 1)))
    self.model.add(Activation('relu'))
    self.model.add(Convolution2D(64, (3, 3), strides=(1, 1)))
    self.model.add(Activation('relu'))
    self.model.add(Flatten())
    self.model.add(Dense(512))
    self.model.add(Activation('relu'))
    self.model.add(Dense(self.nb_actions))
    self.model.add(Activation('linear'))
    print(self.model.summary())

    if os.path.isfile("/home/abcd/model.weights"):
        self.model.load_weights("/home/abcd/model.weights")
    self.compile(Adam(lr=.00025), metrics=['mae'])

...

def compile(self, optimizer, metrics=[]):
    metrics += [mean_q]  # register default metrics

    # We never train the target model, hence we can set the optimizer and loss arbitrarily.
    self.target_model = clone_model(self.model, self.custom_model_objects)

    if os.path.isfile("/home/abcd/target_model.weights"):
        self.target_model.load_weights("/home/abcd/target_model.weights")

    self.target_model.compile(optimizer='sgd', loss='mse')
    self.model.compile(optimizer='sgd', loss='mse')

    # Compile model.
    if self.target_model_update < 1.:
        # We use the `AdditionalUpdatesOptimizer` to efficiently soft-update the target model.
        updates = get_soft_target_model_updates(self.target_model, self.model, self.target_model_update)
        optimizer = AdditionalUpdatesOptimizer(optimizer, updates)

    def clipped_masked_error(args):
        y_true, y_pred, mask = args
        loss = huber_loss(y_true, y_pred, self.delta_clip)
        loss *= mask  # apply element-wise mask
        return K.sum(loss, axis=-1)

    # Create trainable model. The problem is that we need to mask the output since we only
    # ever want to update the Q values for a certain action. The way we achieve this is by
    # using a custom Lambda layer that computes the loss. This gives us the necessary flexibility
    # to mask out certain parameters by passing in multiple inputs to the Lambda layer.
    y_pred = self.model.output
    y_true = Input(name='y_true', shape=(self.nb_actions,))
    mask = Input(name='mask', shape=(self.nb_actions,))
    loss_out = Lambda(clipped_masked_error, output_shape=(1,), name='loss')([y_true, y_pred, mask])
    ins = [self.model.input] if type(self.model.input) is not list else self.model.input
    trainable_model = Model(inputs=ins + [y_true, mask], outputs=[loss_out, y_pred])
    assert len(trainable_model.output_names) == 2
    combined_metrics = {trainable_model.output_names[1]: metrics}
    losses = [
        lambda y_true, y_pred: y_pred,  # loss is computed in Lambda layer
        lambda y_true, y_pred: K.zeros_like(y_pred),  # we only include this for the metrics
    ]

    if os.path.isfile("/home/abcd/trainable_model.weights"):
        trainable_model.load_weights("/home/abcd/trainable_model.weights")

    trainable_model.compile(optimizer=optimizer, loss=losses, metrics=combined_metrics)
    self.trainable_model = trainable_model

    self.compiled = True

...

def final(self, state):
    "Called at the end of each game."
    # call the super-class final method
    PacmanQAgent.final(self, state)

    # did we finish training?
    if self.episodesSoFar == self.numTraining:
        # you might want to print your weights here for debugging
        "*** YOUR CODE HERE ***"
        self.training = False

        # Save the model
        self.model.save_weights("/home/abcd/model.weights", True)
        self.trainable_model.save_weights("/home/abcd/trainable_model.weights", True)
        self.target_model.save_weights("/home/abcd/target_model.weights", True)
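A side note on the three weight files saved above: in keras-rl, trainable_model is built on top of self.model's output tensor, so the two models share the same layer weights, and saving model's weights already covers the Q-network inside trainable_model. A small sketch (toy shapes and a hypothetical layer name, not the question's network) illustrating the sharing:

```python
from tensorflow import keras

# A toy network standing in for self.model.
inp = keras.Input(shape=(3,))
q_out = keras.layers.Dense(2, name='q')(inp)
model = keras.Model(inp, q_out)

# Wrap the same graph in a second Model, the way keras-rl builds trainable_model
# from self.model.output plus extra inputs.
y_true = keras.Input(shape=(2,))
combined = keras.layers.Add()([model.output, y_true])
trainable = keras.Model([inp, y_true], combined)

# The Dense layer is shared: both models point at the same weight variables.
assert model.get_layer('q') is trainable.get_layer('q')
```

Because of this sharing, loading model.weights and trainable_model.weights in sequence restores the same variables twice; only target_model holds an independent copy.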

1 answer:

Answer 0 (score: 0)

I found the problem. The saving and loading actually work fine. I was using an Annealed Epsilon-Greedy Policy, so every time I started training, the agent would initially take mostly random actions. In addition, my testing code was wrong, so the tests did not do what they were supposed to. Together, these two things made it look as though the model learned something during training (training went well) but the weights were not being saved (the testing was wrong, and the next training run started off "randomly").
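For context, keras-rl's LinearAnnealedPolicy wrapped around an epsilon-greedy policy anneals epsilon from a high starting value down to a floor as the step counter grows, so a freshly restarted agent picks mostly random actions at first even when the restored weights are fine. A minimal sketch of such a schedule (the parameter values here are illustrative, not the ones from the question):

```python
import random

def annealed_epsilon(step, eps_max=1.0, eps_min=0.1, nb_anneal_steps=10_000):
    """Linearly anneal epsilon from eps_max down to eps_min over nb_anneal_steps."""
    frac = min(step / nb_anneal_steps, 1.0)
    return eps_max - frac * (eps_max - eps_min)

def select_action(q_values, step):
    """Epsilon-greedy action selection with the annealed epsilon."""
    if random.random() < annealed_epsilon(step):
        return random.randrange(len(q_values))                   # explore
    return max(range(len(q_values)), key=lambda a: q_values[a])  # exploit

# Early steps are almost fully random; late steps are mostly greedy.
assert annealed_epsilon(0) == 1.0
assert abs(annealed_epsilon(10_000) - 0.1) < 1e-9
```

Because the step counter resets on restart, restoring weights without also persisting (or skipping) the annealing schedule makes the agent look untrained for the first chunk of every run.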