Tying weights in Keras layers

Date: 2019-08-23 03:57:40

Tags: keras

Suppose I split my input into two equal-sized parts I1 and I2, and I want the following structure in my Keras network: I1 -> A1, I2 -> A2, and then [A1, A2] -> B, where B is the output node. I can build this with two separate groups of layers. However, I want the connection weights (and other activation parameters) of I1 -> A1 to be identical to those of I2 -> A2, i.e., I want symmetry between branches 1 and 2. (Note that [A1, A2] -> B need not be symmetric.)
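For concreteness, here is a minimal sketch of the untied version of the structure being asked about, in the Keras functional API (layer sizes, activations, and variable names are illustrative, not from the question):

from keras.models import Model
from keras.layers import Input, Dense, Concatenate

x_dim = 5
hidden_dim = 8

i1 = Input(shape=(x_dim,))
i2 = Input(shape=(x_dim,))

# Two independent Dense layers: these do NOT share weights.
a1 = Dense(hidden_dim, activation='relu')(i1)  # I1 -> A1
a2 = Dense(hidden_dim, activation='relu')(i2)  # I2 -> A2

# [A1, A2] -> B; no symmetry is required at this stage.
b = Dense(1)(Concatenate()([a1, a2]))

model = Model(inputs=[i1, i2], outputs=b)

The question is how to force the two Dense layers above to use the same weights.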

1 answer:

Answer 0 (score: 1)

If I understand your question correctly, the mapping from input 1 to A_1 and the mapping from input 2 to A_2 are applied one after another, since you want the mapping function to be the same for both inputs. In that case you could consider an RNN, but if your inputs are independent of each other, you can use the TimeDistributed wrapper in Keras. The example below takes two inputs and maps them one by one with a Dense layer, so the Dense weights are shared:

from keras.models import Model
from keras.layers import Input, Dense, TimeDistributed, Concatenate, Lambda

x_dim = 5
hidden_dim = 8

# Each input is treated as a single "time step" of x_dim features.
x1 = Input(shape=(1, x_dim,))
x2 = Input(shape=(1, x_dim,))

# Stack the two inputs along the time axis: shape (None, 2, x_dim).
concat = Concatenate(axis=1)([x1, x2])

# TimeDistributed applies one Dense layer (one set of weights) to every step.
hidden_concat = TimeDistributed(Dense(hidden_dim))(concat)

# Slice the result back apart into the two branches A1 and A2.
hidden1 = Lambda(lambda x: x[:, :1, :])(hidden_concat)
hidden2 = Lambda(lambda x: x[:, 1:, :])(hidden_concat)

model = Model(inputs=[x1, x2], outputs=[hidden1, hidden2])
model.summary()

>>>
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_33 (InputLayer)           (None, 1, 5)         0                                            
__________________________________________________________________________________________________
input_34 (InputLayer)           (None, 1, 5)         0                                            
__________________________________________________________________________________________________
concatenate_17 (Concatenate)    (None, 2, 5)         0           input_33[0][0]                   
                                                                 input_34[0][0]                   
__________________________________________________________________________________________________
time_distributed_9 (TimeDistrib (None, 2, 8)         48          concatenate_17[0][0]             
__________________________________________________________________________________________________
lambda_8 (Lambda)               (None, 1, 8)         0           time_distributed_9[0][0]         
__________________________________________________________________________________________________
lambda_9 (Lambda)               (None, 1, 8)         0           time_distributed_9[0][0]         
==================================================================================================
Total params: 48
Trainable params: 48
Non-trainable params: 0
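The parameter count confirms the sharing: 48 = 5 × 8 weights + 8 biases, so a single Dense layer serves both time steps (two independent Dense layers would report 96). As a side note, the same weight tying can also be achieved without TimeDistributed by reusing one layer instance on both inputs; a minimal sketch under the same dimensions, not part of the original answer:

from keras.models import Model
from keras.layers import Input, Dense

x_dim = 5
hidden_dim = 8

x1 = Input(shape=(x_dim,))
x2 = Input(shape=(x_dim,))

# A single Dense instance; calling it on both inputs reuses the same weights.
shared_dense = Dense(hidden_dim)
hidden1 = shared_dense(x1)  # I1 -> A1
hidden2 = shared_dense(x2)  # I2 -> A2

model = Model(inputs=[x1, x2], outputs=[hidden1, hidden2])
model.summary()  # also reports 48 trainable parameters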