I want the coordinates of the predicted bounding boxes from a TensorFlow object detection model.
I am using the object detection script from here.
Following some answers on Stack Overflow, I modified the last detection block to:
for image_path in TEST_IMAGE_PATHS:
  image = Image.open(image_path)
  # the array based representation of the image will be used later in order to prepare the
  # result image with boxes and labels on it.
  image_np = load_image_into_numpy_array(image)
  # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
  image_np_expanded = np.expand_dims(image_np, axis=0)
  # Actual detection.
  output_dict = run_inference_for_single_image(image_np, detection_graph)
  # Visualization of the results of a detection.
  width, height = image.size
  print(width, height)
  ymin = output_dict['detection_boxes'][5][0] * height
  xmin = output_dict['detection_boxes'][5][1] * width
  ymax = output_dict['detection_boxes'][5][2] * height
  xmax = output_dict['detection_boxes'][5][3] * width
  #print(output_dict['detection_boxes'][0])
  print(xmin, ymin)
  print(xmax, ymax)
But output_dict['detection_boxes'] contains 100 tuples, and it contains 100 tuples even for images where nothing should be detected.
What I want are the coordinates of all the bounding boxes for a single image.
Answer 0 (score: 1)
After the expand_dims line you can add the code below. The filtered_boxes variable will give you the bounding boxes whose prediction score is greater than 0.5.
(boxes, scores, classes, num) = sess.run(
    [detection_boxes, detection_scores, detection_classes, num_detections],
    feed_dict={image_tensor: image_np_expanded})

# Keep only detections with a valid class id and a score above 0.5.
indexes = []
for i in range(classes.size):
    if classes[0][i] in range(1, 91) and scores[0][i] > 0.5:
        indexes.append(i)

filtered_boxes = boxes[0][indexes, ...]
filtered_scores = scores[0][indexes, ...]
filtered_classes = classes[0][indexes, ...]

# Deduplicate the detected class ids and cast them to int.
filtered_classes = list(set(filtered_classes))
filtered_classes = [int(i) for i in filtered_classes]
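Note that the returned boxes are normalized to [0, 1], so you still need to scale them by the image size to get pixel coordinates. A minimal sketch, assuming image is the PIL image opened earlier in the loop and filtered_boxes is the (N, 4) array of [ymin, xmin, ymax, xmax] rows produced above:

# Sketch: convert the filtered, normalized boxes to pixel coordinates.
width, height = image.size
for box in filtered_boxes:
    ymin, xmin, ymax, xmax = box
    print('box (pixels):',
          xmin * width, ymin * height,
          xmax * width, ymax * height)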
Answer 1 (score: 0)
If you check the pipeline.config file of the model you are using, you can see that in several places the maximum number of boxes is set to 100. For example, in the config file of ssd_mobilenet_v1, the model used in the demo notebook, you can see it below:
post_processing {
  batch_non_max_suppression {
    ...
    max_detections_per_class: 100
    max_total_detections: 100
  }
}
This is also the default in the input readers (for both training and evaluation). You can change these values, but that is only relevant if you are training/evaluating. If you want to run inference without retraining the model, you can simply take a pre-trained model (again, e.g. ssd_mobilenet_v1) and do the exporting yourself, using the --config_override parameter to change the values I mentioned in the NMS (non-max suppression) settings.
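For illustration, a hedged sketch of what such an export could look like with the TF Object Detection API's export_inference_graph.py script; the paths are placeholders and you should check the flag syntax against the version of the API you are using:

python object_detection/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path path/to/pipeline.config \
    --trained_checkpoint_prefix path/to/model.ckpt \
    --output_directory path/to/exported_model \
    --config_override "
      model {
        ssd {
          post_processing {
            batch_non_max_suppression {
              max_detections_per_class: 20
              max_total_detections: 20
            }
          }
        }
      }"

The override text proto is merged on top of the pipeline config, so only the NMS limits shown here change; the 20 is just an assumed example value.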
Answer 2 (score: 0)
import cv2

for image_path in TEST_IMAGE_PATHS:
    image_np = cv2.imread(image_path)
    # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
    image_np_expanded = np.expand_dims(image_np, axis=0)
    # Actual detection.
    output_dict = run_inference_for_single_image(image_np, detection_graph)
    # Visualization of the results of a detection.
    vis_util.visualize_boxes_and_labels_on_image_array(
        image_np,
        output_dict['detection_boxes'],
        output_dict['detection_classes'],
        output_dict['detection_scores'],
        category_index,
        instance_masks=output_dict.get('detection_masks'),
        use_normalized_coordinates=True,
        line_thickness=8)

    # If using cv2 to load the image, shape is (height, width, channels).
    (im_height, im_width) = image_np.shape[:2]
    ymin = output_dict['detection_boxes'][0][0] * im_height
    xmin = output_dict['detection_boxes'][0][1] * im_width
    ymax = output_dict['detection_boxes'][0][2] * im_height
    xmax = output_dict['detection_boxes'][0][3] * im_width
With the code above you get the desired bounding box coordinates of the detection with the maximum score, which sits at the 0th position indicated by the first square bracket.
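If you want the coordinates of all detections rather than just the top one, you can loop over the boxes and keep those above a score threshold. A minimal sketch, reusing output_dict and (im_height, im_width) from the snippet above; the 0.5 threshold is an assumption you can tune:

# Sketch: collect pixel coordinates for every detection above a score threshold.
score_threshold = 0.5  # assumed cut-off, adjust as needed
for box, score in zip(output_dict['detection_boxes'], output_dict['detection_scores']):
    if score < score_threshold:
        continue
    ymin, xmin, ymax, xmax = box
    print(xmin * im_width, ymin * im_height, xmax * im_width, ymax * im_height)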