TFLearn autoencoder allocates all available memory

Asked: 2017-10-18 17:32:59

Tags: python tensorflow tflearn

I am trying to build a simple autoencoder with TFLearn. The training images have a resolution of 150 x 150 pixels (with 3 channels) and were converted into an HDF5 file using TFLearn. The problem is that the network immediately allocates all ~16 GB of available memory.

Here is my code:

import os

import h5py
import tensorflow as tf
import tflearn

h5_file = h5py.File(os.path.join(data_folder, 'dataset150-150.h5'), 'r')
X = h5_file['X']
Y = h5_file['X']  # the autoencoder target is the input itself

batch_size = 8

# Building the encoder
encoder = tflearn.input_data(shape=[batch_size, 150, 150, 3], name='input')
# Flatten the input layer
encoder = tflearn.reshape(encoder, new_shape=[batch_size, 67500])
encoder = tflearn.fully_connected(encoder, 67500)
encoder = tflearn.fully_connected(encoder, 512)
hidden = tflearn.fully_connected(encoder, 16)
decoder = tflearn.fully_connected(hidden, 512)
decoder = tflearn.fully_connected(decoder, 67500)
# Reshape the output back to image shape
decoder = tf.reshape(decoder, [batch_size, 150, 150, 3])

# Regression, with mean square error
net = tflearn.regression(decoder, optimizer='adam', learning_rate=0.001, loss='mean_square', metric=None)

# Training the auto encoder
model = tflearn.DNN(net, tensorboard_verbose=3, tensorboard_dir="./AutoEncoder")
model.fit(X, Y, batch_size=batch_size)
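As a sanity check on the memory figure, a rough back-of-the-envelope estimate (my own arithmetic, not from the original post) shows that the weight matrix of the single 67500 → 67500 fully-connected layer alone already accounts for the observed allocation, assuming float32 weights:

```python
# Rough memory estimate for the weights of one dense layer
# mapping 67500 inputs to 67500 units (bias term is negligible).
units = 67500
num_weights = units * units          # 67500^2 = 4,556,250,000 parameters
bytes_needed = num_weights * 4       # float32 = 4 bytes per weight

print(f"{num_weights:,} weights")
print(f"{bytes_needed / 2**30:.1f} GiB")  # roughly 17 GiB for this layer alone
```

That single layer's weights exceed the ~16 GB available, before counting the mirrored 512 → 67500 decoder layer or optimizer state, which is consistent with the allocation happening immediately at graph construction rather than during training.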

Maybe someone can spot my mistake? Thanks in advance.

0 Answers:

There are no answers yet.