Inference with a PB model on multiple GPUs in TensorFlow

Date: 2019-01-18 02:45:24

Tags: python-3.x tensorflow gpu

I'm using a server with 8 Titan X GPUs and trying to predict images faster than I can with a single GPU. I load the PB model like this:

    model_dir = "./model"
    model = "nasnet_large_v1.pb"
    model_path = os.path.join(model_dir, model)
    model_graph = tf.Graph()
    with model_graph.as_default():
        with tf.gfile.GFile(model_path, 'rb') as f:
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(f.read())
            _ = tf.import_graph_def(graph_def, name='')
            input_layer = model_graph.get_tensor_by_name("input:0")
            output_layer = model_graph.get_tensor_by_name('final_layer/predictions:0')

Then I start iterating over the files in the ./data_input directory like this:

    with tf.Session(graph=model_graph, config=config) as inference_session:
        # Initialize session
        initializer = np.zeros([1, 331, 331, 3])
        print("Initialing session...")
        inference_session.run(output_layer, feed_dict={input_layer: initializer})
        print("Done initialing.")

        # Prediction
        file_list = []
        processed_files = []

        for path, dir, files in os.walk('./model_output/processed_files'):
            for file in files:
                processed_files.append(file.split('_')[0]+'.tfrecord')

        print("Processed files: ")
        for f in processed_files:
            print('\t', f)

        while True:
            for path, dir, files in os.walk("./data_input"):
                for file in files:
                    if file == '.DS_Store': continue
                    if file in processed_files: continue
                    print("Reading file {}".format(file))
                    file_path = os.path.join('./data_input', file)
                    file_list.append(file_path)
                    res = predict(file_path)
                    processed_files.append(file)

                    with open('./model_output/processed_files/{}_{}_processed_files.json'.format(file.split('.')[0], model.split('.')[0]), 'w') as f:
                        f.write(json.dumps(processed_files))

                    with open('./model_output/classify_result/{}_{}_classify_result.json'.format(file.split('.')[0], model.split('.')[0]), 'w') as f:
                        f.write(json.dumps(res, indent=4, separators=(',',':')))

            time.sleep(1)

In the predict() function, I wrote code like this:

    label_map = get_label()
    # read tfrecord file by tf.data
    dataset = get_dataset(filename)
    # dataset.apply(tf.contrib.data.prefetch_to_device("/gpu:0"))
    # load data
    iterator = dataset.make_one_shot_iterator()
    features = iterator.get_next()

    result = []
    content = {}
    count = 0
    # session
    with tf.Session() as sess:
        tf.global_variables_initializer()
        t1 = time.time()
        try:
            while True:
                [_image, _label, _filepath] = sess.run(fetches=features)
                _image = np.asarray([_image])
                _image = _image.reshape(-1, 331, 331, 3)

                predictions = inference_session.run(output_layer, feed_dict={input_layer: _image})
                predictions = np.squeeze(predictions)

                # res = []
                for i, pred in enumerate(predictions):
                    count += 1
                    overall_result = np.argmax(pred)
                    predict_result = label_map[overall_result].split(":")[-1]

                    if predict_result == 'unknown': continue

                    # build a fresh dict per prediction so earlier entries are not overwritten
                    content = {}
                    content['prob'] = str(np.max(pred))
                    content['label'] = predict_result
                    content['filepath'] = str(_filepath[i], encoding='utf-8')
                    result.append(content)

        except tf.errors.OutOfRangeError:
            t2 = time.time()
            print("{} images processed, average time: {}s".format(count, (t2-t1)/count))
    return result

I tried adding with tf.device('/gpu:{}'.format(i)) in the model-loading part, in the inference-session part, and in the session part. nvidia-smi shows that only GPU 0 is used at 100%, while the other GPUs do not appear to do any work even though memory is allocated on them.
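In the model-loading part, for example, that wrapper looks roughly like this (a sketch of the attempt, not the exact code):

    # roughly how the graph import was wrapped in a device scope for GPU i;
    # nvidia-smi still showed only GPU 0 doing any work
    model_graph = tf.Graph()
    with model_graph.as_default():
        with tf.device('/gpu:{}'.format(i)):
            with tf.gfile.GFile(model_path, 'rb') as f:
                graph_def = tf.GraphDef()
                graph_def.ParseFromString(f.read())
                _ = tf.import_graph_def(graph_def, name='')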

What can I do to make all the GPUs run at the same time and speed up prediction?

My code is at https://github.com/tzattack/image_classification_algorithms.

2 Answers:

Answer 0 (score: 0):

You can force the device for every node in the graph like this:

def load_network(graph, i):
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(graph, 'rb') as fid:
        serialized_graph = fid.read()
    od_graph_def.ParseFromString(serialized_graph)
    for node in od_graph_def.node:
        node.device = '/gpu:{}'.format(i) if i >= 0 else '/cpu:0'
    return {"od_graph_def": od_graph_def}

Then you can merge the multiple graph defs you obtain (one per GPU) into a single graph. If all GPUs run the same model, you can also rename the tensors so the copies stay distinct, and then run them all in one session.

Works great for me.
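A minimal sketch of that merging, assuming the load_network() helper above and the tensor names from the question (input:0 and final_layer/predictions:0); the batch-splitting at the end is only illustrative:

import numpy as np
import tensorflow as tf

NUM_GPUS = 8
merged_graph = tf.Graph()
inputs, outputs = [], []

with merged_graph.as_default():
    for i in range(NUM_GPUS):
        # every node of this copy is already pinned to GPU i by load_network()
        graph_def = load_network("./model/nasnet_large_v1.pb", i)["od_graph_def"]
        # a distinct name scope per copy keeps the tensor names unique
        tf.import_graph_def(graph_def, name="gpu_{}".format(i))
        inputs.append(merged_graph.get_tensor_by_name("gpu_{}/input:0".format(i)))
        outputs.append(merged_graph.get_tensor_by_name("gpu_{}/final_layer/predictions:0".format(i)))

config = tf.ConfigProto(allow_soft_placement=True)
with tf.Session(graph=merged_graph, config=config) as sess:
    # split one large batch across the GPUs and run all copies in a single call
    batch = np.zeros([NUM_GPUS, 331, 331, 3], dtype=np.float32)
    feed = {inp: chunk for inp, chunk in zip(inputs, np.split(batch, NUM_GPUS))}
    predictions = sess.run(outputs, feed_dict=feed)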

Answer 1 (score: 0):

You can do the following:

def get_frozen_graph(graph_file):
    """Read Frozen Graph file from disk."""
    with tf.gfile.GFile(graph_file, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    return graph_def

trt_graph1 = get_frozen_graph('/home/ved/ved_1/frozen_inference_graph.pb')

with tf.device('/gpu:1'):
    [tf_input_l1, tf_scores_l1, tf_boxes_l1, tf_classes_l1, tf_num_detections_l1, tf_masks_l1] = tf.import_graph_def(trt_graph1, 
                    return_elements=['image_tensor:0', 'detection_scores:0', 
                    'detection_boxes:0', 'detection_classes:0','num_detections:0', 'detection_masks:0'])
    
tf_sess1 = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))

trt_graph2 = get_frozen_graph('/home/ved/ved_2/frozen_inference_graph.pb')

with tf.device('/gpu:0'):
    [tf_input_l2, tf_scores_l2, tf_boxes_l2, tf_classes_l2, tf_num_detections_l2] = tf.import_graph_def(trt_graph2, 
                    return_elements=['image_tensor:0', 'detection_scores:0', 
                    'detection_boxes:0', 'detection_classes:0','num_detections:0'])
    
tf_sess2 = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
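To actually keep both GPUs busy at the same time, the two sessions can then be driven from separate threads. A hypothetical usage sketch (the run_detection helper and the dummy batch are assumptions, not part of the answer):

import threading
import numpy as np

def run_detection(sess, tf_input, fetches, batch, results, key):
    # each graph copy is pinned to its own GPU, so these calls can overlap
    results[key] = sess.run(fetches, feed_dict={tf_input: batch})

batch = np.zeros([1, 300, 300, 3], dtype=np.uint8)  # dummy image batch
results = {}
t1 = threading.Thread(target=run_detection, args=(
    tf_sess1, tf_input_l1, [tf_scores_l1, tf_boxes_l1, tf_classes_l1], batch, results, 'gpu1'))
t2 = threading.Thread(target=run_detection, args=(
    tf_sess2, tf_input_l2, [tf_scores_l2, tf_boxes_l2, tf_classes_l2], batch, results, 'gpu0'))
t1.start(); t2.start()
t1.join(); t2.join()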