How do I get the coordinates of a bounding box in YOLO object detection?

Asked: 2017-06-14 12:12:25

Tags: computer-vision deep-learning object-detection

[image: detection result with bounding boxes]

I need to get the coordinates of the bounding boxes generated in the above image using YOLO object detection.

6 Answers:

Answer 0 (score: 7):

A quick solution is to modify the image.c file to print out the bounding box information:

...
if(bot > im.h-1) bot = im.h-1;

// Print bounding box values 
printf("Bounding Box: Left=%d, Top=%d, Right=%d, Bottom=%d\n", left, top, right, bot); 
draw_box_width(im, left, top, right, bot, width, red, green, blue);
...
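The snippet above clamps the box edges to the image bounds before printing. The same clamping logic can be sketched in Python as a standalone helper (the function name is my own, not part of darknet):

```python
def clamp_box(left, top, right, bot, img_w, img_h):
    """Clamp corner coordinates to the image bounds, mirroring the
    checks image.c performs before drawing a box."""
    left = max(0, min(left, img_w - 1))
    top = max(0, min(top, img_h - 1))
    right = max(0, min(right, img_w - 1))
    bot = max(0, min(bot, img_h - 1))
    return left, top, right, bot
```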

Answer 1 (score: 3):

There is a nice little Python program (written for Python 2, but it needs only minor changes for 3: just update the print statements and handle the binary strings) that you can use in the main repo {{ 3}}

Note! The given coordinates are the midpoint plus the width and height.
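Since the coordinates come as midpoint plus width and height, converting them to the more common corner form is a one-liner; a small sketch (the helper name is my own):

```python
def midpoint_to_corners(cx, cy, w, h):
    """Convert a YOLO-style (center_x, center_y, width, height) box
    into (left, top, right, bottom) corner coordinates."""
    return cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2
```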

Answer 2 (score: 1):

If you want to implement this in Python, I created a small Python wrapper here. Follow the ReadMe file to install it; it is very easy to install.

After that, follow this example code to learn how to detect objects.
If your detection result is det:

top_left_x = det.bbox.x
top_left_y = det.bbox.y
width = det.bbox.w
height = det.bbox.h

If needed, you can get the midpoint this way:

mid_x, mid_y = det.bbox.get_point(pyyolo.BBox.Location.MID)

Hope this helps.

Answer 3 (score: 1):

For Python users on Windows:

First..., some setup work:

  1. Set the Python path of the darknet folder in the environment variables:

    PYTHONPATH = 'YOUR DARKNET FOLDER'

  2. Add PYTHONPATH to the Path value by appending:

    %PYTHONPATH%

  3. Edit the coco.data file in the cfg folder, changing the names variable to point to your coco.names file, for example:

    names = D:/core/darknetAB/data/coco.names

With this setup, you can call darknet.py (from the alexeyAB\darknet repository) as a Python module from any folder.
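If you prefer not to touch the environment variables, the same effect can be achieved at runtime; a minimal sketch (the darknet folder path is the example path from the steps above, adjust it to your setup):

```python
import sys

# Append the darknet folder at runtime instead of editing PYTHONPATH
# (example path from the steps above; change it to your own).
darknet_dir = 'D:/core/darknetAB'
if darknet_dir not in sys.path:
    sys.path.append(darknet_dir)

# from darknet import performDetect  # now resolvable from any folder
```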

Start scripting:

from darknet import performDetect as scan #calling 'performDetect' function from darknet.py

def detect(str):
    ''' use this script if you only want to get the coords '''
    picpath = str
    cfg='D:/core/darknetAB/cfg/yolov3.cfg' #change this if you want use different config
    coco='D:/core/darknetAB/cfg/coco.data' #you can change this too
    data='D:/core/darknetAB/yolov3.weights' #and this, can be change by you
    test = scan(imagePath=picpath, thresh=0.25, configPath=cfg, weightPath=data, metaPath=coco, showImage=False, makeImageOnly=False, initOnly=False) #default format; I prefer to only return the result, not produce an image, for better performance

    #up to here you get the default-mode data from alexeyAB, as explained in the module.
    #try: help(scan); the result format is: [(item_name, confidence_rate, (x_center_image, y_center_image, width_size_box, height_size_of_box))],
    #to convert it to the commonly used form (as in PIL/OpenCV), do as below (still inside the detect function we created):

    newdata = []
    if len(test) >=2:
        for x in test:
            item, confidence_rate, imagedata = x
            x1, y1, w_size, h_size = imagedata
            x_start = round(x1 - (w_size/2))
            y_start = round(y1 - (h_size/2))
            x_end = round(x_start + w_size)
            y_end = round(y_start + h_size)
            data = (item, confidence_rate, (x_start, y_start, x_end, y_end), w_size, h_size)
            newdata.append(data)

    elif len(test) == 1:
        item, confidence_rate, imagedata = test[0]
        x1, y1, w_size, h_size = imagedata
        x_start = round(x1 - (w_size/2))
        y_start = round(y1 - (h_size/2))
        x_end = round(x_start + w_size)
        y_end = round(y_start + h_size)
        data = (item, confidence_rate, (x_start, y_start, x_end, y_end), w_size, h_size)
        newdata.append(data)

    else:
        newdata = False

    return newdata

How to use it:

table = 'D:/test/image/test1.jpg'
checking = detect(table)

To get the coordinates:

If there is only one result:

x1, y1, x2, y2 = checking[0][2]

If there are many results:

for x in checking:
    item = x[0]
    x1, y1, x2, y2 = x[2]
    print(item)
    print(x1, y1, x2, y2)
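Each entry of checking follows the (item, confidence, (x1, y1, x2, y2), w_size, h_size) layout built in detect() above, so post-filtering by confidence is straightforward; a small sketch (the helper name is my own):

```python
def filter_by_confidence(detections, min_conf=0.5):
    """Keep only detections whose confidence meets the threshold.
    Expects the (item, confidence, (x1, y1, x2, y2), ...) tuples
    produced by the detect() wrapper above; detect() may also
    return False when nothing was found."""
    if not detections:
        return []
    return [d for d in detections if d[1] >= min_conf]
```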

Answer 4 (score: 0):

Inspired by @Wahyu's answer above, with a few changes, modifications and bug fixes; tested with both single-object and multi-object detection.

# calling 'performDetect' function from darknet.py
from darknet import performDetect as scan
import math


def detect(img_path):
    ''' use this script if you only want to get the coords '''
    picpath = img_path
    # change this if you want use different config
    cfg = '/home/saggi/Documents/saggi/prabin/darknet/cfg/yolo-obj.cfg'
    coco = '/home/saggi/Documents/saggi/prabin/darknet/obj.data'  # you can change this too
    # and this, can be change by you
    data = '/home/saggi/Documents/saggi/prabin/darknet/backup/yolo-obj_last.weights'
    test = scan(imagePath=picpath, thresh=0.25, configPath=cfg, weightPath=data, metaPath=coco, showImage=False, makeImageOnly=False,
                initOnly=False)  # default format; I prefer to only return the result, not produce an image, for better performance

    # up to here you get the default-mode data from alexeyAB, as explained in the module.
    # try: help(scan); the result format is: [(item_name, confidence_rate, (x_center_image, y_center_image, width_size_box, height_size_of_box))],
    # to convert it to the commonly used form (as in PIL/OpenCV), do as below (still inside the detect function we created):

    newdata = []

    # For multiple Detection
    if len(test) >= 2:
        for x in test:
            item, confidence_rate, imagedata = x
            x1, y1, w_size, h_size = imagedata
            x_start = round(x1 - (w_size/2))
            y_start = round(y1 - (h_size/2))
            x_end = round(x_start + w_size)
            y_end = round(y_start + h_size)
            data = (item, confidence_rate,
                    (x_start, y_start, x_end, y_end), (w_size, h_size))
            newdata.append(data)

    # For Single Detection
    elif len(test) == 1:
        item, confidence_rate, imagedata = test[0]
        x1, y1, w_size, h_size = imagedata
        x_start = round(x1 - (w_size/2))
        y_start = round(y1 - (h_size/2))
        x_end = round(x_start + w_size)
        y_end = round(y_start + h_size)
        data = (item, confidence_rate,
                (x_start, y_start, x_end, y_end), (w_size, h_size))
        newdata.append(data)

    else:
        newdata = False

    return newdata


if __name__ == "__main__":
    # Multiple detection image test
    # table = '/home/saggi/Documents/saggi/prabin/darknet/data/26.jpg'
    # Single detection image test
    table = '/home/saggi/Documents/saggi/prabin/darknet/data/1.jpg'
    detections = detect(table)

    # Multiple detection
    if detections and len(detections) > 1:
        for detection in detections:
            print(' ')
            print('========================================================')
            print(' ')
            print('All Parameter of Detection: ', detection)

            print(' ')
            print('========================================================')
            print(' ')
            print('Detected label: ', detection[0])

            print(' ')
            print('========================================================')
            print(' ')
            print('Detected object Confidence: ', detection[1])

            x1, y1, x2, y2 = detection[2]
            print(' ')
            print('========================================================')
            print(' ')
            print(
                'Detected object top left and bottom right coordinates (x1, y1, x2, y2):')
            print('x1: ', x1)
            print('y1: ', y1)
            print('x2: ', x2)
            print('y2: ', y2)

            print(' ')
            print('========================================================')
            print(' ')
            print('Detected object width and height: ', detection[3])
            b_width, b_height = detection[3]
            print('Width of bounding box: ', math.ceil(b_width))
            print('Height of bounding box: ', math.ceil(b_height))
            print(' ')
            print('========================================================')

    # Single detection
    elif detections:
        print(' ')
        print('========================================================')
        print(' ')
        print('All Parameter of Detection: ', detections)

        print(' ')
        print('========================================================')
        print(' ')
        print('Detected label: ', detections[0][0])

        print(' ')
        print('========================================================')
        print(' ')
        print('Detected object Confidence: ', detections[0][1])

        x1, y1, x2, y2 = detections[0][2]
        print(' ')
        print('========================================================')
        print(' ')
        print(
            'Detected object top left and bottom right coordinates (x1, y1, x2, y2):')
        print('x1: ', x1)
        print('y1: ', y1)
        print('x2: ', x2)
        print('y2: ', y2)

        print(' ')
        print('========================================================')
        print(' ')
        print('Detected object width and height: ', detections[0][3])
        b_width, b_height = detections[0][3]
        print('Width of bounding box: ', math.ceil(b_width))
        print('Height of bounding box: ', math.ceil(b_height))
        print(' ')
        print('========================================================')

# Single detections output:
# test value  [('movie_name', 0.9223029017448425, (206.79859924316406, 245.4672393798828, 384.83673095703125, 72.8630142211914))]

# Multiple detections output:
# test value  [('movie_name', 0.9225175976753235, (92.47076416015625, 224.9121551513672, 147.2491912841797, 42.063255310058594)),
#  ('movie_name', 0.4900225102901459, (90.5261459350586, 12.4061279296875, 182.5990447998047, 21.261077880859375))]

Answer 5 (score: 0):

If the accepted answer does not work for you, it may be because you are using AlexyAB's darknet model rather than pjreddie's darknet model.

You just need to go to the image_opencv.cpp file in the src folder and uncomment the following section:

            ...

            //int b_x_center = (left + right) / 2;
            //int b_y_center = (top + bot) / 2;
            //int b_width = right - left;
            //int b_height = bot - top;
            //sprintf(labelstr, "%d x %d - w: %d, h: %d", b_x_center, b_y_center, b_width, b_height);

This will print the Bbox center coordinates as well as the Bbox width and height. After making the change, be sure to make darknet again before running YOLO.
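The uncommented lines derive the center, width and height from the corner coordinates; the same arithmetic in Python, for reference (the helper name is my own):

```python
def corners_to_center(left, top, right, bot):
    """Derive (center_x, center_y, width, height) from corner
    coordinates, mirroring the uncommented image_opencv.cpp lines
    (integer division, as in the C code)."""
    b_x_center = (left + right) // 2
    b_y_center = (top + bot) // 2
    b_width = right - left
    b_height = bot - top
    return b_x_center, b_y_center, b_width, b_height
```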