Keras: getting the first n layers

Date: 2016-11-22 05:55:16

Tags: python keras keras-layer

I load an autoencoder from a saved file and display its structure by doing the following:

autoencoder = load_model("autoencoder_mse1.h5")
autoencoder.summary()
>>> ____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to                     
====================================================================================================
input_8 (InputLayer)             (None, 19)            0                                            
____________________________________________________________________________________________________
dense_43 (Dense)                 (None, 16)            320         input_8[0][0]                    
____________________________________________________________________________________________________
dense_44 (Dense)                 (None, 16)            272         dense_43[0][0]                   
____________________________________________________________________________________________________
dense_45 (Dense)                 (None, 2)             34          dense_44[0][0]                   
____________________________________________________________________________________________________
dense_46 (Dense)                 (None, 16)            48          dense_45[0][0]                   
____________________________________________________________________________________________________
dense_47 (Dense)                 (None, 16)            272         dense_46[0][0]                   
____________________________________________________________________________________________________
dense_48 (Dense)                 (None, 19)            323         dense_47[0][0]                   
====================================================================================================
Total params: 1269
____________________________________________________________________________________________________

The first four layers (including the InputLayer) make up the encoder part. I'd like to know whether there is a quick way to grab those four layers. So far, the only workable solution I've come across is:

encoder = Sequential()
# copy dense_43 (19 -> 16) from the trained autoencoder
encoder.add(Dense(16, input_dim=19, weights=autoencoder.layers[1].get_weights()))

^ and then do the remaining two layers by hand. I was hoping there is a way to pull out the first four layers more efficiently, especially since the .summary() method already spits out a per-layer summary.
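One possibility (a sketch, assuming the Keras 1.x functional API; dense_45's position as autoencoder.layers[3] comes from the summary above, and `data` is a hypothetical input array) is to wrap a new Model around the existing graph up to the bottleneck, which reuses the trained weights without copying anything:

from keras.models import Model

# dense_45 (the 2-unit bottleneck) is autoencoder.layers[3].
# Keras 1.x keyword names; in Keras 2 these become inputs=/outputs=.
encoder = Model(input=autoencoder.input, output=autoencoder.layers[3].output)
encoded = encoder.predict(data)   # `data` is hypothetical; output has shape (n_samples, 2)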

Edit 1 (possible solution): I've arrived at the following solution, but I'm hoping it can be made more efficient (less code).

encoder = Sequential()
# Skip the InputLayer and copy each Dense layer's weights up to the 2-unit bottleneck.
for i, l in enumerate(autoencoder.layers[1:]):
    if i == 0:
        # The first layer needs an explicit input_dim (Keras 1.x API).
        encoder.add(Dense(input_dim=data.shape[1], output_dim=l.output_dim,
                          activation="relu", weights=l.get_weights()))
    else:
        encoder.add(Dense(output_dim=l.output_dim, activation="relu",
                          weights=l.get_weights()))
    if l.output_dim == 2:
        break
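A shorter variant of the loop above (again a sketch under the same Keras 1.x assumptions) reuses the saved layers themselves rather than constructing new Dense layers, so the weights never need to be copied:

from keras.layers import Input
from keras.models import Model

inp = Input(shape=(19,))                # 19 input features, as in the summary
x = inp
for layer in autoencoder.layers[1:4]:   # dense_43, dense_44, dense_45
    x = layer(x)                        # calling the saved layer shares its trained weights
encoder = Model(input=inp, output=x)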

1 Answer:

Answer 0 (score: 2)

Try this and let me know if it works:

# To get the first four layers
model.layers[0:4]
# To get the input shape
model.layers[layer_of_interest_index].input_shape
# To get the output shape
model.layers[layer_of_interest_index].output_shape
# To get the weight matrices
model.layers[layer_of_interest_index].get_weights()
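Applied to the autoencoder loaded in the question, that looks roughly like this (a usage sketch; the shapes follow from the summary output, and get_weights() on a Dense layer returns its kernel and bias arrays):

first_four = autoencoder.layers[0:4]        # InputLayer, dense_43, dense_44, dense_45
print(autoencoder.layers[1].input_shape)    # (None, 19)
print(autoencoder.layers[1].output_shape)   # (None, 16)
W, b = autoencoder.layers[1].get_weights()  # W.shape == (19, 16), b.shape == (16,)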

Hope this helps.