Object detection model prediction time in Node.js vs. Python

Time: 2019-06-27 04:51:34

Tags: python node.js tensorflow

I converted models from the TensorFlow model zoo (ssd_mobilenet_v1_ppn_coco, ssd_mobilenet_v2_coco, ssd_mobilenet_v1_coco) using tfjs-converter. However, prediction time in Node.js (about 4 seconds per frame) is consistently much higher than with the same models in Python (about 0.1 seconds per frame). What could be causing this difference in prediction time? Edit: for Python I used the frozen_model.pb, while the saved_model that is also included is what I converted to a tfjs model.

For reference, I am using the models for real-time object detection, reading input from a webcam with OpenCV. I am using the models as-is, without training, but once the performance issue is resolved I intend to train them on my own data.

I also tested the coco_ssd implementation for Node.js included in @tensorflow-models, which references the model found here: ssdlite_mobilenet_v2. Comparing the times between ssd_mobilenet_v2_coco (3-4 seconds per frame) and this model (0.3-0.7 seconds per frame), I believe this suggests the zoo models can be optimized further?
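For context, the per-frame latencies above translate into throughput as follows. This is a quick sketch using the numbers quoted in the question, not a new measurement:

```python
# Convert the per-frame latencies quoted above into frames per second.
# The numbers come from the question itself, not from a new benchmark.
latencies_s = {
    "python_frozen_graph": 0.1,        # ~0.1 s/frame
    "node_converted_zoo_model": 4.0,   # ~4 s/frame
    "node_coco_ssd_package": 0.5,      # midpoint of the 0.3-0.7 s range
}

for setup, seconds in latencies_s.items():
    print(f"{setup}: {1 / seconds:.2f} FPS")
```

At 4 s/frame the converted zoo model manages well under 1 FPS, which rules out real-time use, while 0.1 s/frame in Python is a workable 10 FPS.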

Here is the code I use to benchmark the models in Node.js:

const tf = require('@tensorflow/tfjs-node');  // Node.js binding for TensorFlow

async function load() {
    const model = await tf.loadGraphModel('file://./javascript_models/ssdlite_mobilenet_v2_js/model.json');
    return model;
}

async function main() {
    const model = await load();

    // `input` is a tf.Tensor holding the current webcam frame
    let t = Date.now();
    const outputs = await model.executeAsync({image_tensor: input},
        ['detection_boxes:0',
         'num_detections:0']
    );
    t = Date.now() - t;
    console.log('execute_time: ' + t);
}

Here is the code I use to benchmark in Python:

import time
import tensorflow as tf

# Load graph
def load():
    detection_graph = tf.Graph()
    with detection_graph.as_default():
        od_graph_def = tf.GraphDef()
        with tf.gfile.GFile('./mobilenet_ppn/frozen_inference_graph.pb', 'rb') as fid:
            serialized_graph = fid.read()
            od_graph_def.ParseFromString(serialized_graph)
            tf.import_graph_def(od_graph_def, name='')
    return detection_graph

detection_graph = load()
with detection_graph.as_default():
    with tf.Session(graph=detection_graph) as sess:
        # Input and output tensors for detection_graph
        image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')

        # Each box represents a part of the image where a particular object was detected.
        detection_boxes = detection_graph.get_tensor_by_name(
            'detection_boxes:0')

        # Each score represents the level of confidence for each of the objects.
        # The score is shown on the result image, together with the class label.
        detection_scores = detection_graph.get_tensor_by_name(
            'detection_scores:0')
        detection_classes = detection_graph.get_tensor_by_name(
            'detection_classes:0')
        num_detections = detection_graph.get_tensor_by_name('num_detections:0')

        t = time.time()
        # the actual prediction (frame_expanded is the webcam frame with a batch dimension)
        (boxes, scores, classes, num) = sess.run(
            [detection_boxes, detection_scores,
             detection_classes, num_detections],
            feed_dict={image_tensor: frame_expanded})
        # print the time taken to execute
        print('execute time: ', time.time() - t)
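One benchmarking caveat that applies to both snippets above: the first inference call typically includes one-off costs (graph/session initialization in TensorFlow, op and weight setup in tfjs), so timing a single call can overstate steady-state latency. A minimal sketch of a fairer harness, where `predict` is a hypothetical stand-in for the real model call (`sess.run(...)` in Python, or awaiting `model.executeAsync(...)` in Node.js):

```python
import time

def benchmark(predict, runs=10):
    """Warm up once, then return the mean latency in seconds over `runs` calls.

    `predict` is a hypothetical stand-in for the real model call.
    """
    predict()  # warmup call, excluded from the timing
    start = time.perf_counter()
    for _ in range(runs):
        predict()
    return (time.perf_counter() - start) / runs

# Example with a dummy workload in place of a real model.
mean_s = benchmark(lambda: time.sleep(0.01), runs=5)
print(f"mean latency: {mean_s * 1000:.1f} ms")
```

Even with warmup and averaging, a 40x gap would remain unusual, but this at least rules out first-call overhead as the explanation.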

I expected the model predictions to take roughly the same time; instead, prediction in Python runs roughly 40 times faster (0.1 s vs. 4 s per frame) than in Node.js.

0 Answers