I am building an autoencoder and wondered why the loss does not converge to zero after 500 iterations. So I created this "illustrative" autoencoder, whose encoding dimension equals the input dimension. To rule out any problem with the data, I created a random array of shape (30000, 100) and used it as both input and target (x = y). The network only has to learn to pass the input through unchanged. So why doesn't it reach zero loss?
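A minimal sketch of the data setup described above. The question does not show how the array was generated; `np.random.rand` (uniform values in [0, 1), compatible with the sigmoid output layer below) is an assumption:

```python
import numpy as np

# Hypothetical reconstruction of the training data described in the question:
# 30000 samples with 100 features each, used as both input and target.
x_rand = np.random.rand(30000, 100)

print(x_rand.shape)  # (30000, 100)
```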
# Assumes x_rand is the random (30000, 100) array described above.
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

EPOCHS = 500      # the "500 iterations" mentioned in the question
BATCH_SIZE = 256  # not specified in the question; placeholder value

# this is the size of our encoded representations
encoding_dim = 100  # equal to the input dimension, so there is no bottleneck

inputs = Input(shape=x_rand.shape[1:])
encoded = Dense(100, activation='relu')(inputs)
encoded = Dense(100, activation='relu')(encoded)
encoded = Dense(encoding_dim, activation='relu')(encoded)
decoded = Dense(100, activation='relu')(encoded)
decoded = Dense(100, activation='relu')(decoded)
decoded = Dense(x_rand.shape[-1], activation='sigmoid')(decoded)

# this model maps an input to its reconstruction
autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
history = autoencoder.fit(x_rand, x_rand, epochs=EPOCHS, batch_size=BATCH_SIZE, verbose=2)