Unexpected behavior of tensorflow per_process_gpu_memory_fraction

Date: 2018-09-10 19:24:40

Tags: python-3.x tensorflow memory nvidia

I am trying to limit GPU memory usage to 10% of GPU memory, but according to nvidia-smi the program below uses roughly 13% of the GPU. Is this expected behavior? If it is, where do the other 3-4% come from?

import tensorflow as tf
from time import sleep

i = tf.constant(0)
x = tf.constant(10)
r = tf.add(i, x)

# Use at most 10% of GPU memory; I expect this to set a hard limit.
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=.1)

# sleep is used to see what nvidia-smi says for GPU memory usage.
# I expect it to be at most 10% of GPU memory (which is 1616.0 MiB for my GPU),
# but instead I see the process using up to 2120 MiB.
with tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) as sess:
    sess.run(r)
    sleep(10)

For more details about my environment and GPU, see this GitHub issue: https://github.com/tensorflow/tensorflow/issues/22158

1 Answer:

Answer 0 (score: 0)

Based on my experiments, it looks like cuDNN and cuBLAS context initialization takes roughly 228 MB of memory. In addition, the CUDA context can take another 50 to 118 MB. These allocations happen outside TensorFlow's own allocator, which is what per_process_gpu_memory_fraction actually caps, so the process as a whole can exceed that fraction.
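
As a rough check, you can ask TensorFlow how large its allocator limit is and compare that with the per-process usage nvidia-smi reports; the gap corresponds roughly to the context overhead described above. The sketch below is a minimal illustration, assuming TF 1.x with tf.contrib.memory_stats available and a visible GPU.

# Minimal sketch, assuming TF 1.x with tf.contrib.memory_stats and one GPU.
# BytesLimit() reports the cap the TensorFlow GPU allocator was created with;
# it should be close to fraction * total GPU memory. nvidia-smi instead shows
# the full per-process footprint, including CUDA/cuDNN/cuBLAS contexts.
import tensorflow as tf

fraction = 0.1
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=fraction)

# Place the stats ops on the GPU so they report the GPU allocator, not the CPU one.
with tf.device("/gpu:0"):
    limit_op = tf.contrib.memory_stats.BytesLimit()
    peak_op = tf.contrib.memory_stats.MaxBytesInUse()

with tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) as sess:
    limit_bytes, peak_bytes = sess.run([limit_op, peak_op])
    print("allocator limit: %.1f MiB" % (limit_bytes / 1024.0 / 1024.0))
    print("peak in use:     %.1f MiB" % (peak_bytes / 1024.0 / 1024.0))

If the allocator limit printed here is close to 10% of your GPU memory while nvidia-smi reports a few hundred MiB more for the process, the difference is the library/context overhead rather than TensorFlow ignoring the fraction.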