I built a project that basically uses Google's Object Detection API and TensorFlow.
All I do is run inference with a pre-trained model: that means real-time object detection, where the input is the video stream of a webcam or something similar, read with OpenCV.
Right now performance is already fairly good, but I want to raise the FPS further, because what I observe is that TensorFlow uses my entire memory during inference while GPU utilization is nowhere near its maximum (about 40% on an NVIDIA GTX 1050 laptop, 6% on an NVIDIA Jetson TX2).
So my idea is to increase GPU utilization by increasing the batch size of images per session run.
So my question is: how can I batch multiple frames of the input video stream together before feeding them to sess.run()?
Have a look at my code object_detection.py in my GitHub repo (https://github.com/GustavZ/realtime_object_detection).
I would be very grateful for any hints or code implementations!
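(Side note on the memory observation above: TensorFlow 1.x grabs essentially all GPU memory up front by default, independent of how busy the GPU actually is. If that bothers you, a standard session config like the following, which is not in my script below, caps the allocator; it does not by itself raise utilization.)

# Optional: let TensorFlow grow GPU memory on demand instead of claiming it all.
# Standard TF 1.x session options; pass config to the tf.Session(...) call below.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
# sess = tf.Session(graph=detection_graph, config=config)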
import numpy as np
import os
import six.moves.urllib as urllib
import tarfile
import tensorflow as tf
import cv2
# Protobuf compilation (only needed once)
os.system('protoc object_detection/protos/*.proto --python_out=.')
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
from stuff.helper import FPS2, WebcamVideoStream
# INPUT PARAMS
# Must be OpenCV readable
# 0 = Default Camera
video_input = 0
visualize = True
max_frames = 300  # only used if visualize == False
width = 640
height = 480
fps_interval = 3
bbox_thickness = 8
# Model preparation
# What model to download.
MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17'
MODEL_FILE = MODEL_NAME + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'
# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_CKPT = 'models/' + MODEL_NAME + '/frozen_inference_graph.pb'
# List of the strings that is used to add correct label for each box.
LABEL_MAP = 'mscoco_label_map.pbtxt'
PATH_TO_LABELS = 'object_detection/data/' + LABEL_MAP
NUM_CLASSES = 90
# Download Model
if not os.path.isfile(PATH_TO_CKPT):
    print('Model not found. Downloading it now.')
    opener = urllib.request.URLopener()
    opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
    tar_file = tarfile.open(MODEL_FILE)
    for file in tar_file.getmembers():
        file_name = os.path.basename(file.name)
        if 'frozen_inference_graph.pb' in file_name:
            tar_file.extract(file, os.getcwd())
    os.remove('../' + MODEL_FILE)
else:
    print('Model found. Proceed.')
# Load a (frozen) Tensorflow model into memory.
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')
# Loading label map
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
# Start Video Stream
video_stream = WebcamVideoStream(video_input,width,height).start()
cur_frames = 0
# Detection
with detection_graph.as_default():
    with tf.Session(graph=detection_graph) as sess:
        # Define the input and output Tensors for detection_graph
        image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
        # Each box represents a part of the image where a particular object was detected.
        detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
        # Each score represents the level of confidence for each of the objects.
        # The score is shown on the result image, together with the class label.
        detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')
        detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')
        num_detections = detection_graph.get_tensor_by_name('num_detections:0')
        # FPS calculation
        fps = FPS2(fps_interval).start()
        print("Press 'q' to Exit")
        while video_stream.isActive():
            image_np = video_stream.read()
            # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
            image_np_expanded = np.expand_dims(image_np, axis=0)
            # Actual detection.
            (boxes, scores, classes, num) = sess.run(
                [detection_boxes, detection_scores, detection_classes, num_detections],
                feed_dict={image_tensor: image_np_expanded})
            # Visualization of the results of a detection.
            vis_util.visualize_boxes_and_labels_on_image_array(
                image_np,
                np.squeeze(boxes),
                np.squeeze(classes).astype(np.int32),
                np.squeeze(scores),
                category_index,
                use_normalized_coordinates=True,
                line_thickness=bbox_thickness)
            if visualize:
                cv2.imshow('object_detection', image_np)
                # Exit option
                if cv2.waitKey(1) & 0xFF == ord('q'):
                    break
            else:
                cur_frames += 1
                if cur_frames >= max_frames:
                    break
            # FPS calculation
            fps.update()

# End everything
fps.stop()
video_stream.stop()
cv2.destroyAllWindows()
print('[INFO] elapsed time (total): {:.2f}'.format(fps.elapsed()))
print('[INFO] approx. FPS: {:.2f}'.format(fps.fps()))
Answer 0 (score: 0)
Well, I just collect batch_size frames and feed them all at once:
batch_size = 5
while video_stream.isActive():
    image_np_list = []
    for _ in range(batch_size):
        image_np_list.append(video_stream.read())
        fps.update()
    # Stack the frames into one batch; the model expects shape [batch_size, None, None, 3]
    image_np_expanded = np.asarray(image_np_list)
    # Actual detection.
    (boxes, scores, classes, num) = sess.run(
        [detection_boxes, detection_scores, detection_classes, num_detections],
        feed_dict={image_tensor: image_np_expanded})
    # Visualization of the results of a detection.
    for i in range(batch_size):
        vis_util.visualize_boxes_and_labels_on_image_array(
            image_np_expanded[i],
            boxes[i],
            classes[i].astype(np.int32),
            scores[i],
            category_index,
            use_normalized_coordinates=True,
            line_thickness=bbox_thickness)
        if visualize:
            cv2.imshow('object_detection', image_np_expanded[i])
            # Exit option
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
Of course, if you read out the detection results, you will have to adjust the downstream code accordingly, because they now come with batch_size rows.
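For example, something along these lines. This is just a sketch: it assumes the standard output shapes of the API ([batch_size, max_detections, ...], with num giving the number of valid detections per frame), and the frame_* names are purely illustrative:

for i in range(batch_size):
    n = int(num[i])  # number of valid detections in frame i
    frame_boxes = boxes[i][:n]                       # [n, 4] normalized box coordinates
    frame_scores = scores[i][:n]                     # [n] confidence scores
    frame_classes = classes[i][:n].astype(np.int32)  # [n] class ids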
Please note: before TensorFlow 1.4 (I believe), the Object Detection API only supports batch size of 1 in image_tensor, so this will not work unless you upgrade TensorFlow.
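If you are unsure which version you are running, a quick guard like this (a defensive sketch, not part of the original code) fails early instead of producing a confusing shape error:

# Fail early if the installed TensorFlow is too old for batched image_tensor inputs.
from distutils.version import LooseVersion
assert LooseVersion(tf.__version__) >= LooseVersion('1.4.0'), (
    'Batched inputs to image_tensor need TF >= 1.4, found %s' % tf.__version__)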
Also note that the FPS you get will be an average: frames within the same batch will actually arrive much closer together than frames across batches (since you still have to wait for sess.run() to finish). So the maximum time between two consecutive frames will go up, but the average should still be clearly better than your current FPS.
If you want your frames to be roughly evenly spaced, I think you will need more sophisticated tools such as multithreading and queuing: one thread would read images from the stream and store them in a queue, another would take them from the queue and call sess.run() on them asynchronously; it could also tell the first thread to speed up or slow down depending on its own throughput. This is trickier to implement; a minimal sketch of the idea follows.
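Roughly like this. It is a sketch only: it reuses sess, image_tensor, the detection tensors, video_stream and batch_size from above; the names frame_queue, grab_frames and stop_event are made up for illustration, and the feedback channel back to the producer is left out.

import threading
try:
    import queue           # Python 3
except ImportError:
    import Queue as queue  # Python 2

frame_queue = queue.Queue(maxsize=32)  # bounded, so the grabber blocks if inference falls behind
stop_event = threading.Event()

def grab_frames():
    # Producer: read frames from the stream and enqueue them.
    while not stop_event.is_set() and video_stream.isActive():
        frame_queue.put(video_stream.read())  # blocks while the queue is full

grabber = threading.Thread(target=grab_frames)
grabber.daemon = True
grabber.start()

while not stop_event.is_set():
    # Consumer: pull a full batch out of the queue and run inference on it.
    batch = [frame_queue.get() for _ in range(batch_size)]
    (boxes, scores, classes, num) = sess.run(
        [detection_boxes, detection_scores, detection_classes, num_detections],
        feed_dict={image_tensor: np.asarray(batch)})
    # ... visualize / consume the results as in the batched loop above ...
    if cv2.waitKey(1) & 0xFF == ord('q'):
        stop_event.set()

grabber.join(timeout=1.0)

The bounded queue is what smooths the frame spacing: the producer grabs frames at its own pace and simply stalls when the consumer cannot keep up, instead of the camera read and sess.run() alternating in one thread.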