I developed an application in Python that uses Tensorflow, plus another model that also runs on the GPU. I have a PC with several GPUs (3x NVIDIA GTX 1080), and because every model tries to use all the available GPUs I get an OUT_OF_MEMORY_ERROR. I found that you can assign a specific GPU to a Python script from within Python:
class FCN:
    def __init__(self):
        os.environ['CUDA_VISIBLE_DEVICES'] = '1'
        self.keep_probability = tf.placeholder(tf.float32, name="keep_probabilty")
        self.image = tf.placeholder(tf.float32, shape=[None, IMAGE_SIZE, IMAGE_SIZE, 3], name="input_image")
        self.annotation = tf.placeholder(tf.int32, shape=[None, IMAGE_SIZE, IMAGE_SIZE, 1], name="annotation")
        self.pred_annotation, logits = inference(self.image, self.keep_probability)
        tf.summary.image("input_image", self.image, max_outputs=2)
        tf.summary.image("ground_truth", tf.cast(self.annotation, tf.uint8), max_outputs=2)
        tf.summary.image("pred_annotation", tf.cast(self.pred_annotation, tf.uint8), max_outputs=2)
        self.loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
            logits=logits,
            labels=tf.squeeze(self.annotation, squeeze_dims=[3]),
            name="entropy"))
        tf.summary.scalar("entropy", self.loss)
        ...
This is a snippet of my FCN class, from FCN.py.
In the same file I have a main that uses the class:
if __name__ == "__main__":
    fcn = FCN()
    fcn.train_model()
    images_dir = '/home/super/datasets/MeterDataset/full-dataset-gas-images/'
    for img_file in os.listdir(images_dir):
        fcn.segment(os.path.join(images_dir, img_file))
When Tensorflow prints its output, I can see that only 1 GPU is used, as I expect:
2018-01-09 11:31:57.351029: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 0 with properties:
name: GeForce GTX 1080
major: 6 minor: 1 memoryClockRate (GHz) 1.7335
pciBusID 0000:09:00.0
Total memory: 7.92GiB
Free memory: 7.60GiB
2018-01-09 11:31:57.351047: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0
2018-01-09 11:31:57.351051: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0: Y
2018-01-09 11:31:57.351057: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:09:00.0)
The problem arises when I try to instantiate the FCN object from another script:
def main(args):
    start_time = datetime.now()
    font = cv2.FONT_HERSHEY_SIMPLEX
    results_file = "../results.txt"
    if os.path.exists(results_file):
        os.remove(results_file)
    results_file = open(results_file, "a")
    fcn = FCN()
Here the creation of the object always uses all 3 GPUs, instead of only the one assigned in the __init__() method. This is the undesired output:
2018-01-09 11:41:02.537548: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0 1 2
2018-01-09 11:41:02.537555: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0: Y Y Y
2018-01-09 11:41:02.537558: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 1: Y Y Y
2018-01-09 11:41:02.537561: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 2: Y Y Y
2018-01-09 11:41:02.537567: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:0b:00.0)
2018-01-09 11:41:02.537571: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:1) -> (device: 1, name: GeForce GTX 1080, pci bus id: 0000:09:00.0)
2018-01-09 11:41:02.537574: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:2) -> (device: 2, name: GeForce GTX 1080, pci bus id: 0000:05:00.0)
Answer 0 (score: 7)
Here is what you can do:
Run the script with the CUDA_VISIBLE_DEVICES environment variable already set, as discussed here:
CUDA_VISIBLE_DEVICES=1 python another_script.py
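Equivalently, you can set the variable from inside Python. A minimal sketch (another_script.py is used here as a hypothetical calling script; the point is that the variable is exported before anything initializes TensorFlow's GPU devices):
# another_script.py -- hypothetical sketch, not the asker's actual file
import os

# Has to be set before TensorFlow enumerates the GPUs,
# i.e. before the module that builds the graph / Session is imported and run.
os.environ['CUDA_VISIBLE_DEVICES'] = '1'

from FCN import FCN  # assumes the class shown above lives in FCN.py

fcn = FCN()
fcn.train_model()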
Or provide an explicit configuration to the Session constructor:
config = tf.ConfigProto(device_count={'GPU': 1})
sess = tf.Session(config=config)
...to force tensorflow to use only one GPU, no matter how many GPUs are available. You can also set a fine-grained list of devices via visible_device_list (see config.proto for details).
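As a rough illustration of the visible_device_list option (a sketch, assuming you want this process to see only physical GPU 1, which then appears inside the process as /gpu:0):
import tensorflow as tf

# Expose only physical GPU 1 to this process.
gpu_options = tf.GPUOptions(visible_device_list='1')
config = tf.ConfigProto(gpu_options=gpu_options)
sess = tf.Session(config=config)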