How to run a trained YOLOv3 model on a GPU

Date: 2021-07-12 08:41:40

Tags: python tensorflow object-detection yolo custom-object

I am new to DL/real-time object detection and wanted to learn from YouTube. I watched this video on real-time custom object detection with YOLOv3, https://www.youtube.com/watch?v=DLngCtsG3bk, and completed all the steps correctly. When I run the object detection Python file on my laptop it runs on the CPU, and I only get 2-3 fps. Can anyone tell me how to run it on my GPU instead? The files I got are yolov3_training_last.weights, yolov3_testing.cfg and classes.txt. The Python file is included below; as I said, I get a low frame rate when I run it. If there is any way to increase it by using the GPU, please show me. Thanks in advance.

Python file:

import cv2
import numpy as np
import time

# Load the trained YOLOv3 network (Darknet weights + config) into OpenCV's DNN module
net = cv2.dnn.readNet('yolov3_training_last.weights', 'yolov3_testing.cfg')

classes = []
with open("classes.txt", "r") as f:
    classes = f.read().splitlines()

cap = cv2.VideoCapture(0)
font = cv2.FONT_HERSHEY_PLAIN
colors = np.random.uniform(0, 255, size=(100, 3))

# used to record the time when we processed last frame
prev_frame_time = 0
 
# used to record the time at which we processed current frame
new_frame_time = 0

while True:
    ret, img = cap.read()
    if not ret:  # stop if the camera frame could not be read
        break
    height, width, _ = img.shape

    # Scale pixels to [0,1], resize to 416x416 and swap BGR->RGB for the network
    blob = cv2.dnn.blobFromImage(img, 1/255, (416, 416), (0,0,0), swapRB=True, crop=False)
    net.setInput(blob)
    output_layers_names = net.getUnconnectedOutLayersNames()
    layerOutputs = net.forward(output_layers_names)

    boxes = []
    confidences = []
    class_ids = []

    for output in layerOutputs:
        for detection in output:
            # Class scores start at index 5, after x, y, w, h and objectness
            scores = detection[5:]
            class_id = np.argmax(scores)
            confidence = scores[class_id]
            # Keep only very confident detections
            if confidence > 0.95:
                center_x = int(detection[0]*width)
                center_y = int(detection[1]*height)
                w = int(detection[2]*width)
                h = int(detection[3]*height)

                x = int(center_x - w/2)
                y = int(center_y - h/2)

                boxes.append([x, y, w, h])
                confidences.append((float(confidence)))
                class_ids.append(class_id)

    # Non-maximum suppression: drop overlapping boxes (score threshold 0.2, NMS threshold 0.4)
    indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.2, 0.4)

    if len(indexes)>0:
        for i in indexes.flatten():
            x, y, w, h = boxes[i]
            # Blur the detected region in place
            roi_face = img[y:y+h, x:x+w]
            roi_face = cv2.blur(roi_face, (20, 20))
            img[y:y+h, x:x+w] = roi_face
            label = str(classes[class_ids[i]])
            confidence = str(round(confidences[i],2))
            color = colors[i]
            cv2.rectangle(img, (x,y), (x+w, y+h), color, 2)
            cv2.putText(img, label + " " + confidence, (x, y+20), font, 1, (255,255,255), 2)

    
    # time when we finish processing for this frame
    new_frame_time = time.time()
 
    # Calculate the fps as the reciprocal of the time taken for this frame
    fps = 1/(new_frame_time-prev_frame_time)
    prev_frame_time = new_frame_time

    # Convert to an integer, then to a string so putText can draw it
    fps = str(int(fps))
 
    # Draw the FPS counter on the frame
    cv2.putText(img, fps, (7, 70), cv2.FONT_HERSHEY_SIMPLEX, 3, (100, 255, 0), 3, cv2.LINE_AA)

    cv2.imshow('Image', img)
    key = cv2.waitKey(1)
    if key == 27:  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()

1 Answer:

Answer 0: (score: 0)

I think you need to install the GPU-enabled build of Tensorflow. Then you have to specify which device to use, your CPU or your GPU.
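As a minimal sketch of what that device selection looks like in TensorFlow 2.x (assuming a CUDA-enabled install; note that the script above actually runs inference through OpenCV's dnn module, not TensorFlow):

import tensorflow as tf

# An empty list here means TensorFlow cannot see a GPU and will use the CPU
print(tf.config.list_physical_devices('GPU'))

# Pin the operations in this scope to the first GPU, if one is available
with tf.device('/GPU:0'):
    a = tf.random.uniform((416, 416, 3))
    print(tf.reduce_mean(a))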

Also, your GPU has to be able to run the program, so you need to install the relevant libraries. With an AMD GPU that will be difficult, because as far as I know they don't support GPU Tensorflow. Nvidia, on the other hand, has the necessary drivers and libraries.

Your other option is to use Pytorch; there too you need the right hardware and software.
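A comparable sketch for PyTorch (assuming a CUDA build of torch is installed; model and inputs below are placeholders for whatever network and data you load):

import torch

# Fall back to the CPU when no CUDA device or driver is available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(device)

# Move the model and its inputs to the chosen device before inference
# model = model.to(device)
# outputs = model(inputs.to(device))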

Another way is to use Keras with PlaidML.
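A sketch of the usual PlaidML wiring (assuming the plaidml-keras package is installed and plaidml-setup has been run once to pick a device; PlaidML targets OpenCL, so it can also drive AMD GPUs):

import os

# The backend must be set before keras is imported
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"

import keras
print(keras.backend.backend())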

Or create the model from scratch.
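Finally, since the posted script loads the network through OpenCV's dnn module, a sketch of routing that module to the GPU may be the most direct fix. It only works if cv2 was built with CUDA support (the stock pip opencv-python wheels are CPU-only, so this typically means building OpenCV from source with CUDA and cuDNN):

net = cv2.dnn.readNet('yolov3_training_last.weights', 'yolov3_testing.cfg')

# Ask the DNN module to run on the GPU; on builds without CUDA support,
# OpenCV warns and falls back to the CPU backend at inference time
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)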