I want to process several video streams in real time. I took this gist and made some changes. Here is what I have now:

This runs once, in the main thread; here self is the single instance of the main class:
self.detection_graph = tf.Graph()
with self.detection_graph.as_default():
    # Deserialize the frozen inference graph and import it into the shared graph.
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(model_path, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')
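For context, after the graph is loaded the main thread starts one detector thread per stream on top of it, roughly like the sketch below; DetectorThread (sketched after the next snippet) and streams are placeholder names for this sketch, not the exact ones in the code:

threads = []
for frames in streams:                               # one iterator of frames per camera/stream
    t = DetectorThread(self.detection_graph, frames)
    t.start()                                        # run() feeds frames to that thread's session
    threads.append(t)
for t in threads:
    t.join()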
This runs in each thread; now self is an object of a class that extends Thread:
self.default_graph = self.detection_graph.as_default()
self.sess = tf.Session(graph=self.detection_graph)

# Input and output tensors of the detection graph.
self.image_tensor = self.detection_graph.get_tensor_by_name('image_tensor:0')
self.detection_boxes = self.detection_graph.get_tensor_by_name('detection_boxes:0')
self.detection_scores = self.detection_graph.get_tensor_by_name('detection_scores:0')
self.detection_classes = self.detection_graph.get_tensor_by_name('detection_classes:0')
self.num_detections = self.detection_graph.get_tensor_by_name('num_detections:0')

# ...

# The model expects a batch dimension, so wrap the single frame.
frame_np_expanded = np.expand_dims(frame, axis=0)
boxes, scores, classes, num = self.sess.run(
    [self.detection_boxes, self.detection_scores,
     self.detection_classes, self.num_detections],
    feed_dict={self.image_tensor: frame_np_expanded})

# ...

self.sess.close()
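To make the layout above concrete, here is a minimal sketch of how such a thread class could look; DetectorThread and the frames argument are placeholder names, and the real class differs in the details:

import threading
import numpy as np
import tensorflow as tf

class DetectorThread(threading.Thread):
    """Sketch only: one tf.Session per detector, all backed by the same shared graph."""

    def __init__(self, detection_graph, frames):
        super(DetectorThread, self).__init__()
        self.detection_graph = detection_graph
        self.frames = frames  # any iterable of frames, e.g. read from cv2.VideoCapture
        self.sess = tf.Session(graph=self.detection_graph)
        self.image_tensor = self.detection_graph.get_tensor_by_name('image_tensor:0')
        self.fetches = [self.detection_graph.get_tensor_by_name(name + ':0')
                        for name in ('detection_boxes', 'detection_scores',
                                     'detection_classes', 'num_detections')]

    def run(self):
        for frame in self.frames:
            # The model expects a batch dimension, so wrap the single frame.
            frame_np_expanded = np.expand_dims(frame, axis=0)
            boxes, scores, classes, num = self.sess.run(
                self.fetches, feed_dict={self.image_tensor: frame_np_expanded})
            # ... hand boxes/scores/classes to whatever consumes them ...
        self.sess.close()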
It does now appear to run in parallel, but at the moment even a single detector thread eats up all 3 GB of video memory I have, and before I dig any deeper I would like to know: