Keras out of memory even with a very small batch size

Asked: 2019-01-29 22:03:59

Tags: python tensorflow keras

I built an autoencoder using only the TensorFlow library, with the following network shape:

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         (None, 168, 120, 3)       0
_________________________________________________________________
flatten_1 (Flatten)          (None, 60480)             0
_________________________________________________________________
dense_1 (Dense)              (None, 1024)              61932544
_________________________________________________________________
dense_2 (Dense)              (None, 256)               262400
_________________________________________________________________
dense_3 (Dense)              (None, 1024)              263168
_________________________________________________________________
dense_4 (Dense)              (None, 60480)             61992000
_________________________________________________________________
reshape_1 (Reshape)          (None, 168, 120, 3)       0
=================================================================
Total params: 124,450,112
Trainable params: 124,450,112
Non-trainable params: 0
_________________________________________________________________

In that TensorFlow-only project I was able to train on my GPU with a batch size of 128 without any problems. I wanted to recreate the autoencoder using only Keras, and now I run into out-of-memory exceptions even with a batch size of one. Researching the problem, the usual advice is to reduce the batch size, but I cannot reduce it any further. My machine has two GTX 970 cards running in SLI (CUDA does not care about SLI), for a total of 8 GB of memory. Why can I not train this network with Keras even at a batch size of one, when plain TensorFlow handled a batch size of 128?
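
For scale, a rough back-of-envelope estimate of the static memory this model needs (assuming float32 parameters, one gradient copy per weight, and Adam's two moment buffers; activation memory and cuDNN workspace come on top of this):

# Rough static-memory estimate for the 124M-parameter model above.
# Assumes float32 (4 bytes) per value; Adam keeps two extra buffers per weight.
PARAMS = 124450112
BYTES = 4

weights   = PARAMS * BYTES        # ~0.46 GiB
gradients = PARAMS * BYTES        # ~0.46 GiB
adam_m_v  = 2 * PARAMS * BYTES    # ~0.93 GiB

print((weights + gradients + adam_m_v) / 1024**3)  # ~1.85 GiB before activations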

Here is the relevant code:

Constants:

# Constants

WIDTH = 120
HEIGHT = 168
CHANNELS = 3
NUM_INPUTS = WIDTH*HEIGHT*CHANNELS
BATCH_SIZE=1
NUM_SAMPLES=5000
VALIDATION_SIZE=1
VALIDATION_SAMPLES=100
EPOCHS=1000

HIDDEN_WIDTH = 1024
ENCODING_WIDTH = 256

INPUT_PATH = './input/'
VALIDATION_PATH = './validation/'
MODEL_PATH = './model/'

MODEL_FILE = 'my_model.h5'
EPOCH_FILE = 'initial_epoch.txt'  

Initialization and saving:

# this is our input placeholder
input_img = Input(shape=(constants.HEIGHT,constants.WIDTH,constants.CHANNELS))
# flatten image into one dimension
flatten = Flatten()(input_img)
# hidden layer 1
hidden = Dense(constants.HIDDEN_WIDTH, activation='relu')(flatten)
# "encoded" is the encoded representation of the input
encoded = Dense(constants.ENCODING_WIDTH, activation='relu')(hidden)
# hidden layer 3
hidden = Dense(constants.HIDDEN_WIDTH, activation='relu')(encoded)
# "decoded" is the lossy reconstruction of the input
decoded = Dense(constants.NUM_INPUTS, activation='relu')(hidden)
# reshape to image dimensions
reshape = Reshape((constants.HEIGHT,constants.WIDTH,constants.CHANNELS))(decoded)

# this model maps an input to its reconstruction
autoencoder = Model(input_img, reshape)

autoencoder.summary()

autoencoder.compile(optimizer='adam', loss='mean_squared_error')

train_datagen = ImageDataGenerator(data_format='channels_last',
                                   rescale=1./255)

test_datagen = ImageDataGenerator(data_format='channels_last',
                                  rescale=1./255)

train_generator = train_datagen.flow_from_directory(
        constants.INPUT_PATH, 
        target_size=(constants.HEIGHT,constants.WIDTH),
        color_mode='rgb',
        class_mode='input',
        batch_size=constants.BATCH_SIZE)

validation_generator = test_datagen.flow_from_directory(
        constants.VALIDATION_PATH, 
        target_size=(constants.HEIGHT,constants.WIDTH),
        color_mode='rgb',
        class_mode='input',
        batch_size=constants.VALIDATION_SIZE)


autoencoder.fit_generator(train_generator,
        steps_per_epoch=constants.NUM_SAMPLES*1.0/constants.BATCH_SIZE,
        epochs=1,
        verbose=2,
        validation_data=validation_generator,
        validation_steps=constants.VALIDATION_SAMPLES*1.0/constants.VALIDATION_SIZE)


# Creates a HDF5 file 'my_model.h5'
autoencoder.save(constants.MODEL_PATH+constants.MODEL_FILE)
with open(constants.MODEL_PATH+constants.EPOCH_FILE, 'w') as f:
    f.write(str(1))

print("Done, model created in: " + constants.MODEL_PATH)

Partial error log:

2019-01-29 16:40:10.522222: W tensorflow/core/common_runtime/bfc_allocator.cc:271] ***********************************************************************************************_____
2019-01-29 16:40:10.525191: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at matmul_op.cc:478 : Resource exhausted: OOM when allocating tensor with shape[60480,1024] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
Traceback (most recent call last):
  File "init.py", line 53, in <module>
    validation_steps=constants.VALIDATION_SAMPLES*1.0/constants.VALIDATION_SIZE)
  File "C:\Users\dekke\Anaconda3\envs\tensorflow\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\dekke\Anaconda3\envs\tensorflow\lib\site-packages\keras\engine\training.py", line 1418, in fit_generator
    initial_epoch=initial_epoch)
  File "C:\Users\dekke\Anaconda3\envs\tensorflow\lib\site-packages\keras\engine\training_generator.py", line 217, in fit_generator
    class_weight=class_weight)
  File "C:\Users\dekke\Anaconda3\envs\tensorflow\lib\site-packages\keras\engine\training.py", line 1217, in train_on_batch
    outputs = self.train_function(ins)
  File "C:\Users\dekke\Anaconda3\envs\tensorflow\lib\site-packages\keras\backend\tensorflow_backend.py", line 2715, in __call__
    return self._call(inputs)
  File "C:\Users\dekke\Anaconda3\envs\tensorflow\lib\site-packages\keras\backend\tensorflow_backend.py", line 2675, in _call
    fetched = self._callable_fn(*array_vals)
  File "C:\Users\dekke\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 1439, in __call__
    run_metadata_ptr)
  File "C:\Users\dekke\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 528, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[1024,60480] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[{{node training/Adam/gradients/dense_4/MatMul_grad/MatMul_1}} = MatMul[T=DT_FLOAT, _class=["loc:@training/Adam/gradients/dense_4/MatMul_grad/MatMul"], transpose_a=true, transpose_b=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](dense_3/Relu, training/Adam/gradients/dense_4/Relu_grad/ReluGrad)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

1 Answer:

Answer 0 (score: 2)

I get this from time to time when using the Anaconda tensorflow-gpu package with Keras. I think either your Python script is exhausting the available memory, or tensorflow-gpu is trying to allocate a large chunk of memory all at once:

I usually put this snippet right below my imports, and it works fine for me:

import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config = config)

# Check available GPU devices.
print("The following GPU devices are available: %s" % tf.test.gpu_device_name())

Hope this helps.