TensorFlow memory use when running on GPU: why does it look like not all of the memory is used?

Date: 2018-02-10 20:18:59

Tags: python tensorflow memory memory-management deep-learning

This is a follow-up to the question I posted here: Memory error with larger images when running convolutional neural network using TensorFlow on AWS instance g2.2xlarge

I have built a CNN model in Python using TensorFlow and run it on an NVIDIA GRID K520 GPU. It runs fine with 64x64 images, but with 128x128 images it produces a memory error (even when the input contains only 1 image).

The error says Ran out of memory trying to allocate 2.00GiB. 2GiB is the size of my first fully connected layer's weight matrix (input: 128*128*2 (two channels), output: 128*128, so (128*128*2) * (128*128) * 4 bytes = 2.14748 GB = 2.0 GiB).
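To spell that arithmetic out (plain Python, assuming float32 = 4 bytes per weight):

# Weight matrix of the first fully connected layer:
n_in = 128 * 128 * 2          # 32768 inputs (two channels)
n_out = 128 * 128             # 16384 outputs
print(n_in * n_out * 4)       # 2147483648 bytes = exactly 2.00 GiB
# The same layer for 64x64 images needs (64*64*2) * (64*64) * 4 bytes
# = 134217728 bytes = 128 MiB, i.e. 16x less, which is why the smaller
# images run fine.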

From here I can see that the GRID K520 has 8GB = 7.45GiB of memory. When I start running my code, I also see this output: Total memory: 3.94GiB, Free memory: 3.91GiB

My question is: what is the relationship between all these numbers? If the GPU has 7.45GiB of memory, why is the total memory only 3.94GiB, and most importantly, why can't the GPU allocate 2GiB, which is about half of that total? (I am not a computer scientist, so a detailed answer would be valuable.)

Some more specific information in case it is useful: I tried using allow_growth and per_process_gpu_memory_fraction. I still get the memory error, but I also get some memory statistics (I would really appreciate it if someone could explain these numbers to me):
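For reference, here is how the two options are set (a minimal sketch of the TF 1.x ConfigProto; the 0.5 fraction matches the runs below):

import tensorflow as tf

config = tf.ConfigProto()
# Grow allocations on demand instead of grabbing the whole pool up front:
config.gpu_options.allow_growth = True
# Or cap this process at a fraction of the GPU memory (here 50%):
config.gpu_options.per_process_gpu_memory_fraction = 0.5

with tf.Session(config=config) as sess:
    ...  # run the graph as usual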

allow_growth = True
Stats: 
Limit:                  3878682624
InUse:                  2148557312
MaxInUse:               2148557312
NumAllocs:                      13
MaxAllocSize:           2147483648

allow_growth = False
Stats: 
Limit:                  3878682624
InUse:                  3878682624
MaxInUse:               3878682624
NumAllocs:                      13
MaxAllocSize:           3877822976

per_process_gpu_memory_fraction = 0.5
allow_growth = False
Stats: 
Limit:                  2116026368
InUse:                      859648
MaxInUse:                   859648
NumAllocs:                      12
MaxAllocSize:               409600

per_process_gpu_memory_fraction = 0.5
allow_growth = True
Stats: 
Limit:                  2116026368
InUse:                     1073664
MaxInUse:                  1073664
NumAllocs:                      12
MaxAllocSize:               623616

Minimal working example: using a dummy training set of the same size as my input images, and only one fully connected layer (the full model code is here). This example works for input of size:

X_train = np.random.rand(1, 64, 64, 2)
Y_train = np.random.rand(1, 64, 64)

but does not work for input of size
X_train = np.random.rand(1, 128, 128, 2)
Y_train = np.random.rand(1, 128, 128) 

Code:

import numpy as np
import tensorflow as tf


# Dummy training set:
X_train = np.random.rand(1, 128, 128, 2)
Y_train = np.random.rand(1, 128, 128)
print('X_train.shape at input = ', X_train.shape, ", Size = ",
      X_train.shape[0] * X_train.shape[1] * X_train.shape[2]
      * X_train.shape[3])
print('Y_train.shape at input = ', Y_train.shape, ", Size = ",
      Y_train.shape[0] * Y_train.shape[1] * Y_train.shape[2])


def create_placeholders(n_H0, n_W0):

    x = tf.placeholder(tf.float32, shape=[None, n_H0, n_W0, 2], name='x')
    y = tf.placeholder(tf.float32, shape=[None, n_H0, n_W0], name='y')

    return x, y


def forward_propagation(x):

    x_temp = tf.contrib.layers.flatten(x)  # size (n_im, n_H0 * n_W0 * 2)
    n_out = np.int(x.shape[1] * x.shape[2])  # size (n_im, n_H0 * n_W0)

    # FC: input size (n_im, n_H0 * n_W0 * 2), output size (n_im, n_H0 * n_W0)
    FC1 = tf.contrib.layers.fully_connected(
        x_temp,
        n_out,
        activation_fn=tf.tanh,
        normalizer_fn=None,
        normalizer_params=None,
        weights_initializer=tf.contrib.layers.xavier_initializer(),
        weights_regularizer=None,
        biases_initializer=None,
        biases_regularizer=None,
        reuse=True,
        variables_collections=None,
        outputs_collections=None,
        trainable=True,
        scope='fc1')

    # Reshape output from FC layer into array of size (n_im, n_H0, n_W0, 1):
    FC_M = tf.reshape(FC1, [tf.shape(x)[0], tf.shape(x)[1], tf.shape(x)[2], 1])

    return FC_M


def compute_cost(FC_M, Y):

    cost = tf.square(FC_M - Y)

    return cost


def model(X_train, Y_train, learning_rate=0.0001, num_epochs=100):

    (m, n_H0, n_W0, _) = X_train.shape

    # Create Placeholders
    X, Y = create_placeholders(n_H0, n_W0)

    # Build the forward propagation
    DECONV = forward_propagation(X)

    # Add cost function to tf graph
    cost = compute_cost(DECONV, Y)

    # Backpropagation
    optimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(cost)

    # Initialize all the variables globally
    init = tf.global_variables_initializer()

    # Memory config
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True

    # Start the session to compute the tf graph
    with tf.Session(config=config) as sess:

        # Initialization
        sess.run(init)

        # Training loop
        for epoch in range(num_epochs):

            _, temp_cost = sess.run([optimizer, cost],
                                    feed_dict={X: X_train, Y: Y_train})

            print('EPOCH = ', epoch, 'COST = ', np.mean(temp_cost))


# Finally run the model
model(X_train, Y_train, learning_rate=0.00002, num_epochs=5)

Traceback:

W tensorflow/core/common_runtime/bfc_allocator.cc:274] ****************************************************************************************************
W tensorflow/core/common_runtime/bfc_allocator.cc:275] Ran out of memory trying to allocate 2.00GiB.  See logs for memory state.
W tensorflow/core/framework/op_kernel.cc:983] Internal: Dst tensor is not initialized.
E tensorflow/core/common_runtime/executor.cc:594] Executor failed to create kernel. Internal: Dst tensor is not initialized.
     [[Node: zeros = Const[dtype=DT_FLOAT, value=Tensor<type: float shape: [32768,16384] values: [0 0 0]...>, _device="/job:localhost/replica:0/task:0/gpu:0"]()]]
Traceback (most recent call last):
  File "myAutomap_MinExample.py", line 99, in <module>
    num_epochs=5)
  File "myAutomap_MinExample.py", line 85, in model
    sess.run(init)
  File "/home/ubuntu/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 767, in run
    run_metadata_ptr)
  File "/home/ubuntu/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 965, in _run
    feed_dict_string, options, run_metadata)
  File "/home/ubuntu/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1015, in _do_run
    target_list, options, run_metadata)
  File "/home/ubuntu/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1035, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InternalError: Dst tensor is not initialized.
     [[Node: zeros = Const[dtype=DT_FLOAT, value=Tensor<type: float shape: [32768,16384] values: [0 0 0]...>, _device="/job:localhost/replica:0/task:0/gpu:0"]()]]

Caused by op u'zeros', defined at:
  File "myAutomap_MinExample.py", line 99, in <module>
    num_epochs=5)
  File "myAutomap_MinExample.py", line 72, in model
    optimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(cost)
  File "/home/ubuntu/.local/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 289, in minimize
    name=name)
  File "/home/ubuntu/.local/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 403, in apply_gradients
    self._create_slots(var_list)
  File "/home/ubuntu/.local/lib/python2.7/site-packages/tensorflow/python/training/rmsprop.py", line 103, in _create_slots
    self._zeros_slot(v, "momentum", self._name)
  File "/home/ubuntu/.local/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 647, in _zeros_slot
    named_slots[var] = slot_creator.create_zeros_slot(var, op_name)
  File "/home/ubuntu/.local/lib/python2.7/site-packages/tensorflow/python/training/slot_creator.py", line 121, in create_zeros_slot
    val = array_ops.zeros(primary.get_shape().as_list(), dtype=dtype)
  File "/home/ubuntu/.local/lib/python2.7/site-packages/tensorflow/python/ops/array_ops.py", line 1352, in zeros
    output = constant(zero, shape=shape, dtype=dtype, name=name)
  File "/home/ubuntu/.local/lib/python2.7/site-packages/tensorflow/python/framework/constant_op.py", line 103, in constant
    attrs={"value": tensor_value, "dtype": dtype_value}, name=name).outputs[0]
  File "/home/ubuntu/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2327, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/ubuntu/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1226, in __init__
    self._traceback = _extract_stack()

InternalError (see above for traceback): Dst tensor is not initialized.
     [[Node: zeros = Const[dtype=DT_FLOAT, value=Tensor<type: float shape: [32768,16384] values: [0 0 0]...>, _device="/job:localhost/replica:0/task:0/gpu:0"]()]]

2 Answers:

Answer 0 (score: 3)

It would be good if you could upload your code, or at least a minimal example, so we can see what is going on. Judging only by these numbers, allow_growth seems to be working as intended, i.e., it allocates only as much memory as it actually needs (the 2.148 GB computed above).

You could also provide the full console output of the error you are getting. My guess is that you are mistaking a non-fatal warning message from the TF resource allocator for the actual error that made your program fail.

Is this similar to the message you are seeing? W tensorflow/core/common_runtime/bfc_allocator.cc:217] Allocator (GPU_1_bfc) ran out of memory trying to allocate 2.55GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.

Because if so, this is just a warning that you can ignore unless you want to optimize the runtime performance of your code. It does not cause your program to fail.

Answer 1 (score: 1)

Looking at the error log, either your GPU is out of memory or the tensor was not initialized at that point. You can try inserting Tensor::IsInitialized before the line that triggers the problem (line 99) to make sure it really is the GPU; if it is, you may also have some code from a previous attempt still running on the GPU, so make sure that is not happening. I think there are two discussions that may be relevant to your problem, here: https://github.com/tensorflow/tensorflow/issues/7025 and here: https://github.com/aymericdamien/TensorFlow-Examples/issues/38. Good luck.
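If it helps, here is a minimal sketch for checking what your TensorFlow process can actually see on the GPU (device_lib is an internal TF 1.x helper, so its exact output may differ between versions; leftover processes can also be spotted with nvidia-smi from the shell):

from tensorflow.python.client import device_lib

# Print every device visible to this process and the amount of memory
# TensorFlow is allowed to use on it (memory_limit is in bytes).
for dev in device_lib.list_local_devices():
    print(dev.name, dev.device_type, dev.memory_limit)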