How to use a Keras merge layer for an autoencoder with two outputs

Date: 2018-09-21 01:10:33

Tags: python keras deep-learning keras-layer autoencoder

Say I have two inputs, X and Y, and I want to design a joint autoencoder that reconstructs X' and Y'.

As illustrated in the figure, X is the audio input and Y is the video input. This deep architecture is nice because it has two inputs and two outputs, and the two streams share a layer in the middle. My question is how to write this autoencoder in Keras. Assume every layer is fully connected, except for the shared layer in the middle.

Here is my Keras code:

from keras.layers import Input, Dense
from keras.models import Model
import numpy as np

X = np.random.random((1000, 100))
y = np.random.random((1000, 300))  # x and y can be different sizes

# the X autoencoder layers
Xinput = Input(shape=(100,))
encoded = Dense(50, activation='relu')(Xinput)
encoded = Dense(20, activation='relu')(encoded)
encoded = Dense(15, activation='relu')(encoded)
decoded = Dense(20, activation='relu')(encoded)
decoded = Dense(50, activation='relu')(decoded)
decoded = Dense(100, activation='relu')(decoded)

# the Y autoencoder layers
Yinput = Input(shape=(300,))
encoded = Dense(120, activation='relu')(Yinput)
encoded = Dense(50, activation='relu')(encoded)
encoded = Dense(15, activation='relu')(encoded)
decoded = Dense(50, activation='relu')(encoded)
decoded = Dense(120, activation='relu')(decoded)
decoded = Dense(300, activation='relu')(decoded)

I only have the shared layer of 15 nodes in the middle. My question is how to use the loss function to train this joint autoencoder?

Thanks

2 answers:

Answer 0 (score: 1)

The way your code stands, you have two separate models. While you can simply reuse the output of the shared representation layer for both of the subsequent subnetworks, you have to merge the two encoder subnetworks first so that they form its input:

# imports added for completeness (Concatenate is needed for the merge below)
from keras.layers import Input, Dense, Concatenate
from keras.models import Model

Xinput = Input(shape=(100,))
Yinput = Input(shape=(300,))

Xencoded = Dense(50, activation='relu')(Xinput)
Xencoded = Dense(20, activation='relu')(Xencoded)


Yencoded = Dense(120, activation='relu')(Yinput)
Yencoded = Dense(50, activation='relu')(Yencoded)

shared_input = Concatenate()([Xencoded, Yencoded])
shared_output = Dense(15, activation='relu')(shared_input)

Xdecoded = Dense(20, activation='relu')(shared_output)
Xdecoded = Dense(50, activation='relu')(Xdecoded)
Xdecoded = Dense(100, activation='relu')(Xdecoded)

Ydecoded = Dense(50, activation='relu')(shared_output)
Ydecoded = Dense(120, activation='relu')(Ydecoded)
Ydecoded = Dense(300, activation='relu')(Ydecoded)

Now you have two separate outputs. Therefore you need two separate loss functions, which get added together anyway, in order to compile the model:

model = Model([Xinput, Yinput], [Xdecoded, Ydecoded])
model.compile(optimizer='adam', loss=['mse', 'mse'], loss_weights=[1., 1.])

You can then train the model simply with:

model.fit([X_input, Y_input], [X_label, Y_label])
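
For completeness, a minimal end-to-end training sketch using the random arrays from the question; since this is an autoencoder, each input doubles as its own reconstruction target (the epoch and batch-size values are arbitrary assumptions):

import numpy as np

X = np.random.random((1000, 100))
Y = np.random.random((1000, 300))

# the inputs are also the reconstruction targets
model.fit([X, Y], [X, Y], epochs=10, batch_size=32)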

Answer 1 (score: 0)

Let me clarify something: you want a single model with two input layers and two output layers that share some layers in the middle, right?

I think this can give you an idea:

from keras.layers import Input, Dense, Concatenate
from keras.models import Model
import numpy as np

X = np.random.random((1000, 100))
y = np.random.random((1000, 300))  # x and y can be different size

# the X autoencoder layer 
Xinput = Input(shape=(100,))

encoded_x = Dense(50, activation='relu')(Xinput)
encoded_x = Dense(20, activation='relu')(encoded_x)

# the Y autoencoder layer 
Yinput = Input(shape=(300,))

encoded_y = Dense(120, activation='relu')(Yinput)
encoded_y = Dense(50, activation='relu')(encoded_y)

# concatenate encoding layers
c_encoded = Concatenate(name="concat", axis=1)([encoded_x, encoded_y])
encoded = Dense(15, activation='relu')(c_encoded)

decoded_x = Dense(20, activation='relu')(encoded)
decoded_x = Dense(50, activation='relu')(decoded_x)
decoded_x = Dense(100, activation='relu')(decoded_x)

out_x = SomeOutputLayers(..)(decoded_x)  # placeholder for whatever extra output layers you need

decoded_y = Dense(50, activation='relu')(encoded)
decoded_y = Dense(120, activation='relu')(decoded_y)
decoded_y = Dense(300, activation='relu')(decoded_y)

out_y = SomeOutputLayers(..)(decoded_y)  # placeholder for whatever extra output layers you need

# Now you have two inputs and two outputs with a shared layer
model = Model([Xinput, Yinput], [out_x, out_y])
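
For reference, compiling and training this model could then look like the sketch below, assuming out_x and out_y reconstruct X and y respectively (so their final shapes are (100,) and (300,)); the mean-squared-error losses, epochs, and batch size are assumptions, mirroring the first answer:

model.compile(optimizer='adam', loss=['mse', 'mse'], loss_weights=[1., 1.])
model.fit([X, y], [X, y], epochs=10, batch_size=32)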