Keras autoencoder gives the same output

Time: 2019-10-17 14:47:18

Tags: keras keras-layer autoencoder

I am testing a very simple autoencoder built in Keras; however, it always gives me the same output for different inputs. Here is my code:

import numpy as np
import keras
from keras.layers import Input, Dense
from keras.models import Model
# load_loop_data and add_gaussian_noise are my own helpers, defined elsewhere

def mse(x, y):
    return np.mean((x - y) ** 2)

# Set random seed
seed = 1
np.random.seed(seed)

# Load data
adj, features = load_loop_data(time_steps=288*10) # 10 days; features has the shape [T, N]
print('features shape: ', features.shape)
# shuffling
np.random.shuffle(features)
# Now divide into training and test sets: 90% for training, the rest for testing
features_train = features[:288*9, :]
features_train_noise = add_gaussian_noise(features_train, s=10)
features_test = features[288*9:, :]
features_test_noise = add_gaussian_noise(features_test, s=10)
print(features_train.shape)
print(features_train_noise.shape)
print(features_test.shape)
print(features_test_noise.shape)
assert adj.shape[0] == features_train.shape[1], 'shape inconsistency!!!'
n_nodes = n_features = adj.shape[1]

# Network parameters
input_shape = (n_features,)
batch_size = 64
latent_dim = 64
n_epochs = 100

# Build the Autoencoder Model
# First build the Encoder Model
inputs = Input(shape=input_shape, name='encoder_input')
latent = Dense(latent_dim, activation='tanh', name='latent_vector')(inputs)

# Instantiate Encoder Model
encoder = Model(inputs, latent, name='encoder')
encoder.summary()

# Build the Decoder Model
latent_inputs = Input(shape=(latent_dim,), name='decoder_input')
outputs = Dense(n_features)(latent_inputs)

# Instantiate Decoder Model
decoder = Model(latent_inputs, outputs, name='decoder')
decoder.summary()

# Autoencoder = Encoder + Decoder
# Instantiate Autoencoder Model
autoencoder = Model(inputs, decoder(encoder(inputs)), name='autoencoder')
autoencoder.summary()

myoptimizer = keras.optimizers.Adam(lr=0.0001)
autoencoder.compile(loss='mean_squared_error', optimizer=myoptimizer)

# import pdb; pdb.set_trace()
# Train the autoencoder
autoencoder.fit(x=features_train_noise,
                y=features_train,
                validation_data=(features_test_noise, features_test),
                epochs=n_epochs,
                batch_size=batch_size)

# Predict the autoencoder output from the corrupted test features
x_decoded = autoencoder.predict(features_test_noise)
print('features_test: ', features_test)
print('x_decoded: ', x_decoded)
print('test_mse={:.6f}'.format(mse(features_test, x_decoded)))
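
For reference, add_gaussian_noise just corrupts the input with zero-mean Gaussian noise of standard deviation s; a minimal sketch of the idea (the exact helper is not the point of the question):

def add_gaussian_noise(x, s=10):
    # corrupt every entry with zero-mean Gaussian noise of standard deviation s
    return x + np.random.normal(loc=0.0, scale=s, size=x.shape)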

Here is the output:

features_test:  [[64.06741407 67.31283856 68.42731968 ... 67.3988674  63.06111306
  64.38657689]
 [62.16819592 61.69471294 58.47514973 ... 65.08798509 60.48785799
  65.12542513]
 [54.80038605 48.35916211 21.8033818  ... 66.44045894 62.5013925
  65.92120842]
 ...
 [58.6541824  61.63982039 58.67947493 ... 68.33310833 65.62348312
  63.63177363]
 [63.08518184 67.73249148 67.39465489 ... 65.07520508 62.34224484
  61.30628631]
 [58.11446061 63.35725461 62.54429504 ... 66.46360896 63.41308341
  63.21098821]]
x_decoded:  [[25.06513  24.27289  24.735598 ... 24.184391 24.136896 24.65646 ]
 [25.06513  24.27289  24.735598 ... 24.184391 24.136896 24.65646 ]
 [25.06513  24.27289  24.735598 ... 24.184391 24.136896 24.65646 ]
 ...
 [25.06513  24.27289  24.735598 ... 24.184391 24.136896 24.65646 ]
 [25.06513  24.27289  24.735598 ... 24.184391 24.136896 24.65646 ]
 [25.06513  24.27289  24.735598 ... 24.184391 24.136896 24.65646 ]]
test_mse=1260.581564

I have read similar questions on Stack Overflow, but I still cannot figure out why this is happening.

0 Answers:

No answers yet