import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

def create_keras_model():
    model = Sequential([
        Conv2D(16, 3, padding='same', activation='relu'),
        MaxPooling2D(),
        Conv2D(32, 3, padding='same', activation='relu'),
        MaxPooling2D(),
        Conv2D(64, 3, padding='same', activation='relu'),
        MaxPooling2D(),
        Flatten(),
        Dense(512, activation='relu', kernel_regularizer=tf.keras.regularizers.l2(0.001)),
        Dropout(0.5),
        Dense(1, activation='sigmoid')
    ])
    model.load_weights('/content/drive/My Drive/localmodel/weights')
    return model
I tried something similar in Colab, but I got errno 21: Is a directory.
Then I tried another approach, as shown below:
tff_model = create_keras_model()  # now this function does not load weights, it just returns a Sequential model
tff.learning.assign_weights_to_keras_model(tff_model, model_with_weights)
Just as assign_weights_to_keras_model() transfers weights from tff_model into a Keras model, I want to transfer weights from my Keras model into tff_model. How can this be done?
Answer 0 (score: 3)
Here model_with_weights must be a TFF value representing the weights of the model, for example:
def model_fn():
    keras_model = create_keras_model()
    # from_keras_model also needs a loss (and, depending on the TFF version,
    # an input_spec or dummy_batch); those arguments are elided here.
    return tff.learning.from_keras_model(keras_model)

fed_avg = tff.learning.build_federated_averaging_process(model_fn, ...)
state = fed_avg.initialize()
state = fed_avg.next(state, ...)
...
# Copy the trained server weights back into a Keras model instance.
tff.learning.assign_weights_to_keras_model(keras_model, state.model)
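As a hedged follow-up to the example above (the compile arguments and local_test_dataset below are placeholders, not part of the original answer): once the server weights have been copied in, keras_model behaves like any locally trained Keras model, so it can be compiled and evaluated as usual.

keras_model.compile(
    loss=tf.keras.losses.BinaryCrossentropy(),    # placeholder loss for the sigmoid output
    metrics=[tf.keras.metrics.BinaryAccuracy()])  # placeholder metric
keras_model.evaluate(local_test_dataset)          # any local tf.data.Dataset of (image, label) batches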
Answer 1 (score: 1)
I just figured out how to do this. The idea is to use:
tff.learning.state_with_new_model_weights(state, trainable_weights_numpy, non_trainable_weights_numpy)
The trainable weights are taken from the baseline model and converted to numpy format:
trainable_weights = []
for weights in baseline_model.trainable_weights:
    trainable_weights.append(weights.numpy())
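Putting it together, a minimal sketch of the complete hand-off, assuming state is the server state obtained from the iterative process (initialize/next): the non-trainable weights are collected the same way, and both lists are then pushed into the state. The variable names here are illustrative.

non_trainable_weights = []
for weights in baseline_model.non_trainable_weights:
    non_trainable_weights.append(weights.numpy())

# Overwrite the model weights held in the server state with the Keras weights.
state = tff.learning.state_with_new_model_weights(
    state, trainable_weights, non_trainable_weights)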
This is particularly useful when the server holds some of the data and the clients have similar data. It can possibly also be used for transfer learning.