Python TensorFlow inference multiprocessing threading.Lock object error

Asked: 2019-12-26 23:53:08

Tags: python tensorflow gpu

I'm trying to add multiprocessing to the Python TensorFlow inference script below, but I keep getting this error:

"TypeError: can't pickle _thread.RLock objects"

From what I've read online, I understand this happens because a threading.Lock object cannot be pickled, since it is tied to the operating system, but I'm not sure how to work around it.
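For context, the limitation is easy to demonstrate without TensorFlow at all: multiprocessing serializes every argument it sends to a child process with pickle, and lock objects refuse to be pickled. A minimal illustration in plain Python:

import pickle
import threading

# multiprocessing pickles process arguments; a lock (which tf.Session and
# tf.Graph objects hold internally) cannot be serialized
try:
    pickle.dumps(threading.RLock())
except TypeError as exc:
    print(exc)  # e.g. "cannot pickle '_thread.RLock' object"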

In case it helps, I'm trying to do the multiprocessing on a single GPU. Is that even possible, or can I only run one process per GPU?

import multiprocessing
def perform_inference_processed(sess, curr_graph, meta_graph_def, input_list, per_model_output):
    with sess.as_default():
        with curr_graph.as_default():
            signature = meta_graph_def.signature_def

            # x_tensor_name, training_flag_tensor_name and y_tensor_name
            # are defined elsewhere in the full script
            x_inp = sess.graph.get_tensor_by_name(x_tensor_name)
            tflag_op = sess.graph.get_tensor_by_name(training_flag_tensor_name)
            y_op = sess.graph.get_tensor_by_name(y_tensor_name)

            tmp_output = sess.run(y_op, {x_inp: input_list, tflag_op: False})
            per_model_output.append(tmp_output)

def perform_inference_processed_parent(input_list):
    per_model_output = []
    process_list = []  # list of child processes, one per model

    for model_idx in range(len(model_names)):
        sess = cr_sessions[model_idx]
        curr_graph = cr_graphs[model_idx]
        meta_graph_def = meta_graph_definitions[model_idx]
        p = multiprocessing.Process(target=perform_inference_processed,
                                    args=(sess,
                                          curr_graph,
                                          meta_graph_def,
                                          input_list,
                                          per_model_output))
        process_list.append(p)
        p.start()

    for p in process_list:
        p.join()

    return per_model_output
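A workaround commonly suggested for this error is to pass each child process only picklable values (the exported model's directory, the input batch, an index) and let every process build its own graph and tf.Session, sending the result back through a multiprocessing.Queue. The sketch below is only a rough outline under those assumptions: it presumes TensorFlow 1.x models exported in the SavedModel format, a hypothetical model_dirs list of export directories, and the tensor-name globals from the script above; the per_process_gpu_memory_fraction / allow_growth options are what let several processes coexist on a single GPU, at the cost of splitting its memory between them.

import multiprocessing
import tensorflow as tf

def inference_worker(model_dir, input_list, model_idx, result_queue):
    # Each child process builds its own graph and session from the exported
    # model directory, so nothing unpicklable crosses the process boundary --
    # only strings, lists and the queue handle are pickled.
    gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.25,
                                allow_growth=True)
    config = tf.ConfigProto(gpu_options=gpu_options)
    graph = tf.Graph()
    with graph.as_default():
        with tf.Session(config=config) as sess:
            tf.saved_model.loader.load(
                sess, [tf.saved_model.tag_constants.SERVING], model_dir)
            x_inp = sess.graph.get_tensor_by_name(x_tensor_name)
            tflag_op = sess.graph.get_tensor_by_name(training_flag_tensor_name)
            y_op = sess.graph.get_tensor_by_name(y_tensor_name)
            output = sess.run(y_op, {x_inp: input_list, tflag_op: False})
            # appending to a plain list inside a child process is invisible to
            # the parent; a Queue (or Manager list) carries the result back
            result_queue.put((model_idx, output))

def perform_inference_queued_parent(input_list):
    result_queue = multiprocessing.Queue()
    processes = []
    for model_idx, model_dir in enumerate(model_dirs):  # model_dirs: hypothetical
        p = multiprocessing.Process(target=inference_worker,
                                    args=(model_dir, input_list,
                                          model_idx, result_queue))
        processes.append(p)
        p.start()

    # drain the queue before joining so large results cannot deadlock the workers
    results = [result_queue.get() for _ in processes]
    for p in processes:
        p.join()
    return [output for _, output in sorted(results, key=lambda r: r[0])]

That said, on a single GPU it is often simpler to keep one process and use threading instead: tf.Session is thread-safe and sess.run releases the GIL while the graph executes, so threads sharing the already-loaded sessions avoid both the pickling problem and the duplicated GPU memory.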

0 Answers:

No answers yet.