I have a simple question, but I can't figure out how to do it. I'm using the TF Object Detection API to detect images. It works fine and, given an image, it draws bounding boxes with the labels and confidence scores of the classes it thinks it detected. My question is: how do I print the detected classes (as strings) and their scores to the terminal, i.e. not only on the image but also as terminal output?

Here is the code responsible for the image detection:
with detection_graph.as_default():
  with tf.Session(graph=detection_graph) as sess:
    for image_path in TEST_IMAGE_PATHS:
      image = Image.open(image_path)
      # the array based representation of the image will be used later in order
      # to prepare the result image with boxes and labels on it.
      image_np = load_image_into_numpy_array(image)
      # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
      image_np_expanded = np.expand_dims(image_np, axis=0)
      image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
      # Each box represents a part of the image where a particular object was detected.
      boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
      # Each score represents the level of confidence for each of the objects.
      # The score is shown on the result image, together with the class label.
      scores = detection_graph.get_tensor_by_name('detection_scores:0')
      classes = detection_graph.get_tensor_by_name('detection_classes:0')
      num_detections = detection_graph.get_tensor_by_name('num_detections:0')
      # Actual detection.
      (boxes, scores, classes, num_detections) = sess.run(
          [boxes, scores, classes, num_detections],
          feed_dict={image_tensor: image_np_expanded})
      # Visualization of the results of a detection.
      vis_util.visualize_boxes_and_labels_on_image_array(
          image_np,
          np.squeeze(boxes),
          np.squeeze(classes).astype(np.int32),
          np.squeeze(scores),
          category_index,
          use_normalized_coordinates=True,
          line_thickness=8,
          min_score_thresh=.2)
      plt.figure(figsize=IMAGE_SIZE)
      plt.imshow(image_np)
      plt.show()
Thanks in advance. This is my first post on Stack Overflow, so please go easy on me.

Answer 0 (score: 9)
That's easy. The entries of classes are keys into category_index, which is a dict, so you can do the following:
with detection_graph.as_default():
  with tf.Session(graph=detection_graph) as sess:
    for image_path in TEST_IMAGE_PATHS:
      image = Image.open(image_path)
      # the array based representation of the image will be used later in order
      # to prepare the result image with boxes and labels on it.
      image_np = load_image_into_numpy_array(image)
      # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
      image_np_expanded = np.expand_dims(image_np, axis=0)
      image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
      # Each box represents a part of the image where a particular object was detected.
      boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
      # Each score represents the level of confidence for each of the objects.
      # The score is shown on the result image, together with the class label.
      scores = detection_graph.get_tensor_by_name('detection_scores:0')
      classes = detection_graph.get_tensor_by_name('detection_classes:0')
      num_detections = detection_graph.get_tensor_by_name('num_detections:0')
      # Actual detection.
      (boxes, scores, classes, num_detections) = sess.run(
          [boxes, scores, classes, num_detections],
          feed_dict={image_tensor: image_np_expanded})
      # Here output the category as string and score to terminal
      print([category_index.get(i) for i in classes[0]])
      print(scores)
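Note that this prints all detections, including low-confidence ones. A minimal sketch of filtering the printed output with the same threshold the visualizer uses (the category_index, classes, and scores values below are hypothetical stand-ins for the real sess.run outputs and label map):

```python
import numpy as np

# Hypothetical stand-ins for the sess.run outputs and the label map
# (category_index is normally built by label_map_util.create_category_index).
category_index = {1: {'id': 1, 'name': 'person'}, 2: {'id': 2, 'name': 'dog'}}
classes = np.array([[1, 2, 1]])
scores = np.array([[0.92, 0.45, 0.08]])

# Keep only detections above the same threshold passed to the visualizer,
# so the terminal output matches the boxes drawn on the image.
min_score_thresh = 0.2
detections = [
    (category_index[int(c)]['name'], float(s))
    for c, s in zip(classes[0], scores[0])
    if s > min_score_thresh
]
for name, score in detections:
    print('{}: {:.0%}'.format(name, score))  # person: 92%  /  dog: 45%
```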
Answer 1 (score: 6)

Just go to the utils directory inside the object_detection folder and open the script visualization_utils.py. You will find a function called visualize_boxes_and_labels_on_image_array; add a print statement at the end of the function to print the variable class_name (print(class_name)). Now run your code and watch the magic.
Answer 2 (score: 1)

Dat and Omar, I have a basic question. When we print the arrays, they contain the top 100 scores and classes, but only 2 or 3 of them are actually shown in the output image (bounding boxes and accuracy). How can we subset only the values that are actually shown in the output image? Is that possible, or do we need to set a fixed accuracy threshold (and possibly lose some of the objects shown in the output image)?
Answer 3 (score: 1)

Below is code that solves your problem. TF version 1.12.0. I tested it using a webcam.

Go to def visualize_boxes_and_labels_on_image_array in ..\models\research\object_detection\utils\visualization_utils.py and modify the for loop.

Print display_str right after it is defined (line 21 of the loop, I believe). If you print at the end of the for loop instead, you get a "class_name referenced before assignment" error. I got this error whenever no object was detected in my camera feed, after adding the print statement at the bottom as Ravish suggested.
for i in range(min(max_boxes_to_draw, boxes.shape[0])):
  if scores is None or scores[i] > min_score_thresh:
    box = tuple(boxes[i].tolist())
    if instance_masks is not None:
      box_to_instance_masks_map[box] = instance_masks[i]
    if instance_boundaries is not None:
      box_to_instance_boundaries_map[box] = instance_boundaries[i]
    if keypoints is not None:
      box_to_keypoints_map[box].extend(keypoints[i])
    if scores is None:
      box_to_color_map[box] = groundtruth_box_visualization_color
    else:
      display_str = ''
      if not skip_labels:
        if not agnostic_mode:
          if classes[i] in category_index.keys():
            class_name = category_index[classes[i]]['name']
          else:
            class_name = 'N/A'
          display_str = str(class_name)
          print(display_str)
      if not skip_scores:
        if not display_str:
          display_str = '{}%'.format(int(100*scores[i]))
        else:
          display_str = '{}: {}%'.format(display_str, int(100*scores[i]))
      box_to_display_str_map[box].append(display_str)
      if agnostic_mode:
        box_to_color_map[box] = 'DarkOrange'
      else:
        box_to_color_map[box] = STANDARD_COLORS[
            classes[i] % len(STANDARD_COLORS)]
      # (print(class_name))  -- doesn't work: error, class_name referenced before assignment
Answer 4 (score: 0)

I was confused at first too: there were more than 100 boxes even though only one was drawn on my image. I agree with all the answers. Here is a simple copy-paste solution for your inference.py:
# assume you've got this in your inference.py
vis_util.visualize_boxes_and_labels_on_image_array(
    image_np,
    output_dict['detection_boxes'],
    output_dict['detection_classes'],
    output_dict['detection_scores'],
    category_index,
    instance_masks=output_dict.get('detection_masks'),
    use_normalized_coordinates=True,
    line_thickness=8)

# This is the way I'm getting my coordinates
boxes = output_dict['detection_boxes']
max_boxes_to_draw = boxes.shape[0]
scores = output_dict['detection_scores']
min_score_thresh = .5
for i in range(min(max_boxes_to_draw, boxes.shape[0])):
  if scores is None or scores[i] > min_score_thresh:
    # boxes[i] is the box which will be drawn
    print("This box is gonna get used", boxes[i])
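Keep in mind that the boxes printed this way are in normalized [ymin, xmin, ymax, xmax] coordinates, since the visualizer was called with use_normalized_coordinates=True. A minimal sketch of converting one to pixel coordinates (the box values and image size below are hypothetical stand-ins for output_dict['detection_boxes'] and your actual image shape):

```python
import numpy as np

# Hypothetical stand-ins for output_dict['detection_boxes'] and the image size.
boxes = np.array([[0.1, 0.2, 0.5, 0.6]])  # normalized [ymin, xmin, ymax, xmax]
im_height, im_width = 480, 640

# Scale the normalized coordinates by the image dimensions.
ymin, xmin, ymax, xmax = boxes[0]
pixel_box = (int(ymin * im_height), int(xmin * im_width),
             int(ymax * im_height), int(xmax * im_width))
print("Pixel coordinates (top, left, bottom, right):", pixel_box)
```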