Greedy layer-wise training in Keras?

Asked: 2020-08-15 08:41:47

Tags: python machine-learning keras sequential

I am trying to do layer-wise training in Keras using the Model API.

I want to first define the model with all layers set to non-trainable, then train the layers one by one, switching each layer back to non-trainable once it has been trained.

Is this possible?

    from tensorflow.keras.layers import Input, Embedding, LSTM, Dense
    from tensorflow.keras.models import Model

    inp = Input(shape=(max_len,))
    embedded = Embedding(21, 100, mask_zero=True, trainable=False)(inp)
    lstm = LSTM(10, return_sequences=True, trainable=False)(embedded)
    lstm = LSTM(10, return_sequences=True, trainable=False)(lstm)
    lstm = LSTM(5, return_sequences=False, trainable=False)(lstm)
    output = Dense(1, activation="sigmoid", trainable=True)(lstm)
    model = Model(inputs=inp, outputs=output)
    model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["acc"])
    print(model.summary())

    # Unfreeze one layer per pass, refreezing the previously trained one.
    for i in range(1, len(model.layers)):
        [print(layer.trainable, layer.name) for layer in model.layers]
        model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=100)
        model.layers[i - 1].trainable = False
        model.layers[i].trainable = True
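One detail worth noting: in Keras, a layer's `trainable` flag is only read when the model is compiled, so toggling it between `fit` calls has no effect unless you recompile. Below is a minimal sketch of the stage-wise loop with the recompile added; `max_len` and the tiny random arrays are placeholders invented for illustration, not values from the question.

```python
import numpy as np
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense
from tensorflow.keras.models import Model

max_len = 8  # placeholder sequence length
# Tiny random stand-in data, just so the sketch runs end to end.
x_train = np.random.randint(1, 21, size=(16, max_len))
y_train = np.random.randint(0, 2, size=(16, 1))

inp = Input(shape=(max_len,))
x = Embedding(21, 100, mask_zero=True)(inp)
x = LSTM(10, return_sequences=True)(x)
x = LSTM(10, return_sequences=True)(x)
x = LSTM(5, return_sequences=False)(x)
out = Dense(1, activation="sigmoid")(x)
model = Model(inputs=inp, outputs=out)

# model.layers[0] is the Input layer, so the trainable stack starts at index 1.
for i in range(1, len(model.layers)):
    for j, layer in enumerate(model.layers[1:], start=1):
        layer.trainable = (j == i)  # exactly one layer trainable per stage
    # Changing `trainable` only takes effect after recompiling.
    model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["acc"])
    model.fit(x_train, y_train, epochs=1, verbose=0)
```

Recompiling at each stage also resets the optimizer state, which is usually what you want when switching to a fresh layer.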

0 Answers:

There are no answers yet