I am developing a simple REST controller with gunicorn and Flask.
On every REST call I execute the following code:
@app.route('/objects', methods=['GET'])
def get_objects():
    video_title = request.args.get('video_title')
    video_path = "../../video/" + video_title
    cl.logger.info(video_path)
    start = request.args.get('start')
    stop = request.args.get('stop')
    scene = [start, stop]

    frames = images_utils.extract_frames(video_path, scene[0], scene[1], 1)
    cl.logger.info(scene[0] + " " + scene[1])
    objects = list()

    # ## objects
    model = GenericDetector('../resources/open_images/frozen_inference_graph.pb', '../resources/open_images/labels.txt')
    model.run(frames)
    for result in model.get_boxes_and_labels():
        if result is not None:
            objects.append(result)

    data = {'message': {
        'start_time': scene[0],
        'end_time': scene[1],
        'path': video_path,
        'objects': objects,
    }, 'metadata_type': 'detection'}

    return jsonify({'status': data}), 200
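For context, a call to this endpoint looks roughly like the following (host, port, and parameter values are made up for illustration; gunicorn's default bind address is assumed):

    # Hypothetical request against the /objects route above.
    import requests

    resp = requests.get(
        "http://127.0.0.1:8000/objects",
        params={"video_title": "example.mp4", "start": "0", "stop": "10"},
    )
    print(resp.json())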
This code runs the following frozen TensorFlow model:
class GenericDetector(Process):

    def __init__(self, model, labels):
        # ## Load a (frozen) Tensorflow model into memory.
        self.detection_graph = tf.Graph()
        with self.detection_graph.as_default():
            od_graph_def = tf.GraphDef()
            with tf.gfile.GFile(model, 'rb') as fid:
                serialized_graph = fid.read()
                od_graph_def.ParseFromString(serialized_graph)
                tf.import_graph_def(od_graph_def, name='')

        self.boxes_and_labels = []

        # ## Loading label map
        with open(labels) as f:
            txt_labels = f.read()
            self.labels = json.loads(txt_labels)

    def run(self, frames):
        tf.reset_default_graph()
        with self.detection_graph.as_default():
            config = tf.ConfigProto()
            config.gpu_options.allow_growth = True
            with tf.Session(graph=self.detection_graph, config=config) as sess:
                image_tensor = self.detection_graph.get_tensor_by_name('image_tensor:0')
                # Each box represents a part of the image where a particular object was detected.
                detection_boxes = self.detection_graph.get_tensor_by_name('detection_boxes:0')
                # Each score represents the level of confidence for each of the objects.
                detection_scores = self.detection_graph.get_tensor_by_name('detection_scores:0')
                detection_classes = self.detection_graph.get_tensor_by_name('detection_classes:0')
                num_detections = self.detection_graph.get_tensor_by_name('num_detections:0')

                i = 0
                for frame in frames:
                    # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
                    image_np_expanded = np.expand_dims(frame, axis=0)

                    # Actual detection.
                    (boxes, scores, classes, num) = sess.run(
                        [detection_boxes, detection_scores, detection_classes, num_detections],
                        feed_dict={image_tensor: image_np_expanded})

                    boxes = np.squeeze(boxes)
                    classes = np.squeeze(classes).astype(np.int32)
                    scores = np.squeeze(scores)

                    for j, box in enumerate(boxes):
                        if all(v == 0 for v in box):
                            continue
                        self.boxes_and_labels.append(
                            {
                                "ymin": str(box[0]),
                                "xmin": str(box[1]),
                                "ymax": str(box[2]),
                                "xmax": str(box[3]),
                                "label": self.labels[str(classes[j])],
                                "score": str(scores[j]),
                                "frame": i
                            })
                    i += 1
            sess.close()

    def get_boxes_and_labels(self):
        return self.boxes_and_labels
Everything seems to work fine, but as soon as I send a second request to the server, my GPU (a GTX 1050) goes out of memory:

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape [3,3,256,256] and type float

If I try the call again after that, it works most of the time; sometimes it also keeps working on subsequent calls. I tried executing the GenericDetector on a separate process (making GenericDetector inherit from Process), but that did not help. I read that the GPU memory should be released once the process that executed the REST GET dies, so I also tried adding a sleep(30) after executing the TensorFlow model, but with no luck. What am I doing wrong?
Answer 0 (score: 1)
The problem is that TensorFlow allocates memory for the process, not for the Session, so closing the session is not enough (even with the allow_growth option set).
The first is the allow_growth option, which attempts to allocate only as much GPU memory as needed based on runtime allocations: it starts out allocating very little memory, and as Sessions get run and more GPU memory is needed, the GPU memory region needed by the TensorFlow process is extended. Note that memory is not released, since that can lead to even worse memory fragmentation.
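A minimal sketch of that setting with the TF 1.x API (it is the same option the question's run method already sets):

    import tensorflow as tf

    # Grow GPU allocations on demand instead of grabbing all memory up front.
    # Memory acquired this way still belongs to the process and is not given
    # back when the Session is closed.
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    with tf.Session(config=config) as sess:
        pass  # run inference here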
There is an issue on the TF GitHub that contains some solutions; for example, you can decorate your run method with the RunAsCUDASubprocess suggested in that thread.
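RunAsCUDASubprocess itself is not reproduced here, but the underlying idea is to do all GPU work in a short-lived child process, so that its memory is handed back to the driver when the child exits. A rough sketch of that idea using the standard multiprocessing module, assuming GenericDetector is importable from your module and the frames are picklable NumPy arrays:

    import multiprocessing as mp

    def _detect(model_path, labels_path, frames, queue):
        # TensorFlow runs only inside this short-lived child process, so the GPU
        # memory it grabs is released when the child terminates.
        detector = GenericDetector(model_path, labels_path)
        detector.run(frames)
        queue.put(detector.get_boxes_and_labels())

    def detect_in_subprocess(model_path, labels_path, frames):
        ctx = mp.get_context("spawn")   # do not inherit CUDA state from the parent
        queue = ctx.Queue()
        worker = ctx.Process(target=_detect, args=(model_path, labels_path, frames, queue))
        worker.start()
        results = queue.get()           # read before join so a full queue cannot deadlock
        worker.join()
        return results

The Flask route would then call detect_in_subprocess(...) instead of constructing GenericDetector directly; each request pays the cost of starting a process and loading the model, but the GPU is left clean between requests.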
Answer 1 (score: 0)
This error means that you are trying to fit more into GPU memory than you have available. Maybe you can reduce the number of parameters somewhere in the model to make it lighter?