Is it valid to train the autoencoder before building the encoder/decoder models?

Time: 2019-03-23 10:23:47

Tags: python tensorflow machine-learning keras autoencoder

I am following the tutorial at https://blog.keras.io/building-autoencoders-in-keras.html to build my autoencoder. For that, I tried two strategies:

A) Step 1: build the autoencoder; Step 2: build the encoder; Step 3: build the decoder; Step 4: compile the autoencoder; Step 5: train the autoencoder.

B) Step 1: build the autoencoder; Step 2: compile the autoencoder; Step 3: train the autoencoder; Step 4: build the encoder; Step 5: build the decoder.

In both cases the model converges to a loss of 0.100. However, with strategy A, which is the one described in the tutorial, the reconstructions are poor; with strategy B the reconstructions are much better.

To me this makes sense: in strategy A, the weights of the encoder and decoder models are built on top of untrained layers, so the results should be random. In strategy B, on the other hand, the weights are well defined after training, so the reconstructions should be better.

My question is: is strategy B valid, or am I cheating somehow? In the case of strategy A, shouldn't Keras update the weights of the encoder and decoder models automatically, since their models are built from the autoencoder's layers?

###### Code for Strategy A

# Step 1: build the full autoencoder
from keras.layers import Input, Dense
from keras.models import Model

features = Input(shape=(x_train.shape[1],))

# Encoder: compress the input down to encoding_dim
encoded = Dense(1426, activation='relu')(features)
encoded = Dense(732, activation='relu')(encoded)
encoded = Dense(328, activation='relu')(encoded)
encoded = Dense(encoding_dim, activation='relu')(encoded)

# Decoder: reconstruct the input from the code
decoded = Dense(328, activation='relu')(encoded)
decoded = Dense(732, activation='relu')(decoded)
decoded = Dense(1426, activation='relu')(decoded)
decoded = Dense(x_train.shape[1], activation='relu')(decoded)

autoencoder = Model(inputs=features, outputs=decoded)

# Step 2: build the encoder (shares its layers with the autoencoder)
encoder = Model(features, encoded)

# Step 3: build the decoder from the autoencoder's last four layers
encoded_input = Input(shape=(encoding_dim,))
decoder_layer = autoencoder.layers[-4](encoded_input)
decoder_layer = autoencoder.layers[-3](decoder_layer)
decoder_layer = autoencoder.layers[-2](decoder_layer)
decoder_layer = autoencoder.layers[-1](decoder_layer)

decoder = Model(encoded_input, decoder_layer)

# Step 4: compile the autoencoder
autoencoder.compile(optimizer='adam', loss='mse')

# Step 5: train the autoencoder
history = autoencoder.fit(x_train,
                          x_train,
                          epochs=150,
                          batch_size=256,
                          shuffle=True,
                          verbose=1,
                          validation_split=0.2)

# Testing encoding
encoded_fts = encoder.predict(x_test)
decoded_fts = decoder.predict(encoded_fts)

###### Code for Strategy B

# Step 1: build the full autoencoder (same architecture as in A)
features = Input(shape=(x_train.shape[1],))

encoded = Dense(1426, activation='relu')(features)
encoded = Dense(732, activation='relu')(encoded)
encoded = Dense(328, activation='relu')(encoded)
encoded = Dense(encoding_dim, activation='relu')(encoded)

decoded = Dense(328, activation='relu')(encoded)
decoded = Dense(732, activation='relu')(decoded)
decoded = Dense(1426, activation='relu')(decoded)
decoded = Dense(x_train.shape[1], activation='relu')(decoded)

autoencoder = Model(inputs=features, outputs=decoded)

# Step 2: compile the autoencoder
autoencoder.compile(optimizer='adam', loss='mse')

# Step 3: train the autoencoder
history = autoencoder.fit(x_train,
                          x_train,
                          epochs=150,
                          batch_size=256,
                          shuffle=True,
                          verbose=1,
                          validation_split=0.2)

# Step 4: build the encoder, after training
encoder = Model(features, encoded)

# Step 5: build the decoder from the autoencoder's last four layers
encoded_input = Input(shape=(encoding_dim,))
decoder_layer = autoencoder.layers[-4](encoded_input)
decoder_layer = autoencoder.layers[-3](decoder_layer)
decoder_layer = autoencoder.layers[-2](decoder_layer)
decoder_layer = autoencoder.layers[-1](decoder_layer)

decoder = Model(encoded_input, decoder_layer)

# Testing encoding
encoded_fts = encoder.predict(x_test)
decoded_fts = decoder.predict(encoded_fts)
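
For reference, a quick way to quantify the difference in reconstruction quality between the two strategies (a sketch; it assumes x_test and decoded_fts from either block above, and numpy imported as np):

# Mean squared reconstruction error on the test set
import numpy as np

mse = np.mean((x_test - decoded_fts) ** 2)
print('test reconstruction MSE:', mse)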

1 Answer:

Answer 0 (score: 2)

> My question is: is strategy B valid, or am I cheating somehow?

A and B are equivalent; no, you're not cheating.

> In the case of strategy A, shouldn't Keras update the weights of the encoder and decoder models automatically, since their models are built from the autoencoder's layers?

The decoder model uses only the autoencoder's layers. In case A:

decoder.layers
Out:
[<keras.engine.input_layer.InputLayer at 0x7f8a44d805c0>,
 <keras.layers.core.Dense at 0x7f8a44e58400>,
 <keras.layers.core.Dense at 0x7f8a44e746d8>,
 <keras.layers.core.Dense at 0x7f8a44e14940>,
 <keras.layers.core.Dense at 0x7f8a44e2dba8>]

autoencoder.layers
Out:
[<keras.engine.input_layer.InputLayer at 0x7f8a44e91c18>,
 <keras.layers.core.Dense at 0x7f8a44e91c50>,
 <keras.layers.core.Dense at 0x7f8a44e91ef0>,
 <keras.layers.core.Dense at 0x7f8a44e89080>,
 <keras.layers.core.Dense at 0x7f8a44e89da0>,
 <keras.layers.core.Dense at 0x7f8a44e58400>,
 <keras.layers.core.Dense at 0x7f8a44e746d8>,
 <keras.layers.core.Dense at 0x7f8a44e14940>,
 <keras.layers.core.Dense at 0x7f8a44e2dba8>]
The hex numbers (object ids) of the last 4 entries of each list are identical, because they are the very same objects. Naturally, they share their weights as well.
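
A quick sanity check makes this explicit (a sketch; it assumes the models from case A above and numpy imported as np):

# The decoder's Dense layers ARE the autoencoder's last four layers
import numpy as np

for d_layer, a_layer in zip(decoder.layers[1:], autoencoder.layers[-4:]):
    print(d_layer is a_layer)  # True: same Python objects
    # Same objects necessarily hold the same weight arrays
    print(all(np.array_equal(dw, aw)
              for dw, aw in zip(d_layer.get_weights(), a_layer.get_weights())))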

In case B:

decoder.layers
Out:
[<keras.engine.input_layer.InputLayer at 0x7f8a41de05f8>,
 <keras.layers.core.Dense at 0x7f8a41ee4828>,
 <keras.layers.core.Dense at 0x7f8a41eaceb8>,
 <keras.layers.core.Dense at 0x7f8a41e50ac8>,
 <keras.layers.core.Dense at 0x7f8a41e5d780>]

autoencoder.layers
Out:
[<keras.engine.input_layer.InputLayer at 0x7f8a41da3940>,
 <keras.layers.core.Dense at 0x7f8a41da3978>,
 <keras.layers.core.Dense at 0x7f8a41da3a90>,
 <keras.layers.core.Dense at 0x7f8a41da3b70>,
 <keras.layers.core.Dense at 0x7f8a44720cf8>,
 <keras.layers.core.Dense at 0x7f8a41ee4828>,
 <keras.layers.core.Dense at 0x7f8a41eaceb8>,
 <keras.layers.core.Dense at 0x7f8a41e50ac8>,
 <keras.layers.core.Dense at 0x7f8a41e5d780>]

Again, the layers are identical.

So the training order in A and B is equivalent. More generally, if layers (and therefore weights) are shared, then in most cases the order of building, compiling, and training does not matter, because the shared layers live in the same TensorFlow graph.
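
Here is a minimal toy sketch of that point (all names and sizes below are made up for the example):

import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

x = np.random.rand(64, 8).astype('float32')

inp = Input(shape=(8,))
hidden = Dense(3, activation='relu')(inp)
out = Dense(8, activation='relu')(hidden)

auto = Model(inp, out)
enc = Model(inp, hidden)  # built BEFORE any training, as in strategy A

w_before = enc.get_weights()[0].copy()
auto.compile(optimizer='adam', loss='mse')
auto.fit(x, x, epochs=5, verbose=0)

# The encoder was never trained directly, but its weights changed,
# because its layers are shared with the trained autoencoder:
print(np.allclose(w_before, enc.get_weights()[0]))  # almost surely False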

I ran these examples on the MNIST dataset, and they showed the same performance and reconstructed the images well. So I suppose that if you have trouble with case A, you missed something else (I can't say what, because I copy-pasted your code and everything was OK).
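
For reference, the data preparation for such a run might look like this (a sketch; the standard keras.datasets.mnist loader is assumed, and encoding_dim = 32 is an arbitrary choice, since the question does not give its value):

from keras.datasets import mnist
import numpy as np

# Flatten the 28x28 images and scale pixel values to [0, 1]
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.reshape(len(x_train), -1).astype('float32') / 255.
x_test = x_test.reshape(len(x_test), -1).astype('float32') / 255.
encoding_dim = 32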

If you use Jupyter, restarting the kernel and re-running everything top to bottom sometimes helps.