I am writing an autoencoder in Keras:
from keras.layers import Input, Dense
from keras.models import Model

inputs = Input((n_channels,))                     # n_channels is the input dimension
l1 = Dense(40, activation="relu")(inputs)
l2 = Dense(19)(l1)                                # bottleneck / encoding layer
l3 = Dense(40, activation="relu")(l2)
training_layer = Dense(n_channels)(l3)            # reconstruction output
unify_layer = Model(inputs=inputs, outputs=l2)                 # encoder only
training_layer = Model(inputs=inputs, outputs=training_layer)  # full autoencoder
I use training_layer for training and unify_layer for prediction, so when I save the model and later continue learning, I want to have access to both endpoints.
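For context, this is roughly how the two models are used; because they share the same underlying layers, fitting one updates the other (a minimal sketch, assuming n_channels is defined and X is some placeholder training data):

import numpy as np

n_channels = 64                         # assumed input dimension
X = np.random.rand(1000, n_channels)    # placeholder data

training_layer.compile(optimizer="adam", loss="mse")
training_layer.fit(X, X, epochs=10, batch_size=32)   # train the full autoencoder
codes = unify_layer.predict(X)                        # 19-dim encodings from the shared, trained layers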
[Edit prompted by Marcin's comment] Model.save only lets me save one model at a time. When I call:
unify_layer.save("unify")
training_layer.save("training")
and then
unify_layer = load_model("unify")
training_layer = load_model("training")
the two models are no longer connected, i.e. when I train training_layer, unify_layer does not get trained.
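The reason is that each saved file contains its own copy of the shared layers, so after loading, the two models hold independent weight tensors. A quick illustrative check:

# After the two load_model() calls above, the corresponding layers are
# distinct objects with separate weights:
print(unify_layer.layers[1] is training_layer.layers[1])   # False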
Answer 0 (score: 3):
Oh, I can actually just use the save_weights and load_weights methods:
from keras.layers import Input, Dense
from keras.models import Model

class Autoencoder():
    def __init__(self):
        # Build the shared layer graph once; both models below reference the same layer objects.
        inputs = Input((n_channels,))
        l1 = Dense(40, activation="relu")(inputs)
        l2 = Dense(19)(l1)
        l3 = Dense(40, activation="relu")(l2)
        training_layer = Dense(n_channels)(l3)
        self.unify_layer = Model(inputs=inputs, outputs=l2)
        self.training_layer = Model(inputs=inputs, outputs=training_layer)

    def save(self, filename):
        self.unify_layer.save_weights("unify_" + filename)
        self.training_layer.save_weights("training_" + filename)

    def load(self, filename):
        self.unify_layer.load_weights("unify_" + filename)
        self.training_layer.load_weights("training_" + filename)
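Because the layers are built once in __init__ and both Model objects point to the same layer instances, saving and reloading the weights keeps the two endpoints in sync. A usage sketch (the data X and the filename are illustrative):

ae = Autoencoder()
ae.training_layer.compile(optimizer="adam", loss="mse")
ae.training_layer.fit(X, X, epochs=10, batch_size=32)
ae.save("ae.h5")                        # writes unify_ae.h5 and training_ae.h5

# Later, to resume training or to encode new data:
ae2 = Autoencoder()
ae2.load("ae.h5")
codes = ae2.unify_layer.predict(X)      # encoder weights are restored
ae2.training_layer.fit(X, X, epochs=5)  # continuing training also updates ae2.unify_layer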