|  Processes:                                                GPU Memory |
|  GPU   PID    Type   Process name                               Usage |
|    0   6944      C   python3                                 11585MiB |
|    1   6944      C   python3                                 11587MiB |
|    2   6944      C   python3                                 10621MiB |
After stopping TensorFlow mid-run, nvidia-smi shows the GPU memory is not released.
I tried this:
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allocator_type = 'BFC'
config.gpu_options.per_process_gpu_memory_fraction = 0.90
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
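As a back-of-envelope sketch of what per_process_gpu_memory_fraction does: it caps TensorFlow's allocator at that fraction of the card's total memory. The 12189 MiB total below is a hypothetical card size, not a value from the original post:

```python
# Hypothetical card size in MiB; per_process_gpu_memory_fraction caps the
# allocator at roughly fraction * total memory.
total_mib = 12189
fraction = 0.90
cap_mib = int(total_mib * fraction)
print(cap_mib)  # the most this process would be allowed to allocate
```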
I also tried:
with tf.device('/gpu:0'):
with tf.Graph().as_default():
and tried resetting the GPU:
sudo nvidia-smi --gpu-reset -i 0
None of these released the memory.
Answer (score: 1)
The solution came from Tensorflow set CUDA_VISIBLE_DEVICES within jupyter, thanks to Yaroslav.
Most of this information was gathered from the TensorFlow documentation and Stack Overflow. I'm not allowed to post the links; not sure why.
Insert this at the beginning of your code:
import os
from tensorflow.python.client import device_lib

# Set the environment variables before TensorFlow initializes CUDA
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# Double-check that the correct devices are visible to TF
print("{0}\nThe available CPU/GPU devices on your system\n{0}".format('=' * 100))
print(device_lib.list_local_devices())
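A side note on these environment variables: they only take effect if set before TensorFlow initializes CUDA, and they are inherited by child processes. The snippet below is a minimal sketch using plain Python (no TensorFlow required) showing the two common settings, including hiding every GPU to force CPU execution:

```python
import os

# "PCI_BUS_ID" makes device numbering match the ordering nvidia-smi shows.
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"

# Expose only GPU 0 to this process; an empty string ("") would hide
# every GPU and force TensorFlow onto the CPU.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# Any process spawned from here inherits the same restriction.
visible = os.environ["CUDA_VISIBLE_DEVICES"]
```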
There are different options for running on the GPU or the CPU. I am using the CPU; switch by uncommenting one of the lines below:
with tf.device('/cpu:0'):
# with tf.device('/gpu:0'):
# with tf.Graph().as_default():
Use the following lines for the session:
config = tf.ConfigProto(device_count={'GPU': 1}, log_device_placement=False,
                        allow_soft_placement=True)
# Allocate GPU memory incrementally, based on runtime allocations
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
# The session must be closed so its resources are released
sess.close()

Or, better, use the session as a context manager so it is closed automatically:

with tf.Session(config=config) as sess:
    ...
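The value of the with form can be illustrated without TensorFlow at all. The FakeSession class below is a hypothetical stand-in for tf.Session, showing that __exit__ runs even when the body raises, so close() is always called:

```python
# A minimal sketch of why the `with` form is preferred: the context
# manager's __exit__ runs even if the body raises, so resources are
# always released. FakeSession is a hypothetical stand-in for tf.Session.
class FakeSession:
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()
        return False  # do not swallow exceptions

with FakeSession() as s:
    pass  # run the graph here

print(s.closed)  # close() was called automatically on exit
```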
Another useful read on the importance of 'with' is TensorFlow's official tf.Session() documentation.
To find out which devices your operations and tensors are assigned to, create the session with the
log_device_placement configuration option set to True.
To have TensorFlow automatically choose an existing, supported device when the specified
one doesn't exist, set allow_soft_placement=True in the configuration when creating the session.
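The idea behind allow_soft_placement can be sketched in plain Python. The place function below is a hypothetical illustration of the fallback behaviour, not TensorFlow's actual implementation:

```python
# Hypothetical sketch of soft placement: if the requested device is
# unavailable, either fall back to an existing one (soft placement on)
# or fail (soft placement off).
def place(requested, available, allow_soft_placement):
    if requested in available:
        return requested
    if allow_soft_placement:
        return available[0]  # fall back to the first existing device
    raise RuntimeError("Device {} does not exist".format(requested))

# On a CPU-only machine, a /gpu:0 request falls back to /cpu:0.
fallback = place("/gpu:0", ["/cpu:0"], allow_soft_placement=True)
print(fallback)
```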