Why does an autoencoder perform much worse with TensorFlow 2.0 than with the same code in Keras?

Time: 2019-11-18 16:48:44

Tags: python tensorflow keras autoencoder

I am training an autoencoder on the MNIST data. With Keras the validation loss drops steadily (from 0.2687 down toward 0.1), but with tensorflow (version 2.0) .keras the validation loss stays stuck around 0.6, even though I am using the same code.

Below is the code with keras (you can test it in colab), followed by the code with tf.keras (you can test it in colab).

from keras.layers import Input, Dense
from keras.models import Model, Sequential
from keras.datasets import mnist
import numpy as np

#Import the MNIST data, only take the images since we don't need the targets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
#Normalize and reshape images to vectors
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
print (x_train.shape)
print (x_test.shape)


# Autoencoder: 784 -> 64 -> 784
input = Input(shape=(784,))
y = Dense(64, activation='relu')(input)
z = Dense(784, activation='sigmoid')(y)
ae = Model(input, z)
# Separate encoder and decoder models that reuse the autoencoder's layers
encoder = Model(input, y)
input_decoder = Input(shape=(64,))
decoder_layer = ae.layers[-1]
decoder = Model(input_decoder, decoder_layer(input_decoder))
ae.compile(optimizer='adadelta', loss='binary_crossentropy')
ae.fit(x_train, x_train, epochs=50, batch_size=256, shuffle=False, validation_data=(x_test, x_test))

Epoch 1/50
60000/60000 [==============================] - 5s 88us/step - loss: 0.3494 - val_loss: 0.2688
Epoch 2/50
60000/60000 [==============================] - 4s 74us/step - loss: 0.2578 - val_loss: 0.2445
Epoch 3/50

from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.datasets import mnist
import numpy as np

#Import the MNIST data, only take the images since we don't need the targets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
#Normalize and reshape images to vectors
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
print (x_train.shape)
print (x_test.shape)


# Autoencoder: 784 -> 64 -> 784
input = Input(shape=(784,))
y = Dense(64, activation='relu')(input)
z = Dense(784, activation='sigmoid')(y)
ae = Model(input, z)
# Separate encoder and decoder models that reuse the autoencoder's layers
encoder = Model(input, y)
input_decoder = Input(shape=(64,))
decoder_layer = ae.layers[-1]
decoder = Model(input_decoder, decoder_layer(input_decoder))
ae.compile(optimizer='adadelta', loss='binary_crossentropy')
ae.fit(x_train, x_train, epochs=50, batch_size=256, shuffle=False, validation_data=(x_test, x_test))

Train on 60000 samples, validate on 10000 samples
Epoch 1/50
60000/60000 [==============================] - 4s 59us/sample - loss: 0.6941 - val_loss: 0.6939
Epoch 2/50
60000/60000 [==============================] - 3s 47us/sample - loss: 0.6937 - val_loss: 0.6936
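One thing I have not ruled out (this is only a guess on my part, not something I have confirmed): since compile() receives the optimizer only as the string 'adadelta', each framework fills in its own default hyperparameters, and those defaults may differ between standalone Keras and tf.keras. A minimal check, assuming both packages are installed in the same environment:

# Compare the default Adadelta settings each framework uses when the
# optimizer is passed to compile() only as the string 'adadelta'.
# (Assumes standalone Keras and TensorFlow 2.0 are both installed.)
from keras import optimizers as keras_opt
from tensorflow.keras import optimizers as tf_opt

print(keras_opt.Adadelta().get_config())  # standalone Keras defaults
print(tf_opt.Adadelta().get_config())     # tf.keras defaults

If the two configs differ (for example in the learning rate), that alone could explain the gap between the two runs, but I have not verified this.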

0 Answers:

There are no answers yet.