What's wrong with this webcam face detection?

Date: 2016-08-05 06:05:08

Tags: python opencv webcam dlib

Dlib has a very convenient, fast, and efficient object detection routine, and I wanted to make a cool face-tracking example, similar to the example here.

The widely supported OpenCV has a fairly fast VideoCapture module (a snapshot takes a fifth of a second, versus 1 second or more for programs that wake up the webcam and fetch a picture). I added it to the face detector Python example that ships with dlib.

If you display and process the OpenCV VideoCapture output directly, it looks strange, because OpenCV apparently stores images in BGR rather than RGB order. After adjusting for that, it works, but slowly:

from __future__ import division
import sys

import dlib
from skimage import io


detector = dlib.get_frontal_face_detector()
win = dlib.image_window()

if len( sys.argv[1:] ) == 0:
    from cv2 import VideoCapture
    from time import time

    cam = VideoCapture(0)  #set the port of the camera as before

    while True:
        start = time()
        retval, image = cam.read() #returns a True boolean and the image if all goes right

        for row in image:
            for px in row:
                #rgb expected... but the array is bgr?
                r = px[2]
                px[2] = px[0]
                px[0] = r
        #import matplotlib.pyplot as plt
        #plt.imshow(image)
        #plt.show()

        print( "readimage: " + str( time() - start ) )

        start = time()
        dets = detector(image, 1)
        print("your faces: {}".format(len(dets)))
        for i, d in enumerate( dets ):
            print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format(
                i, d.left(), d.top(), d.right(), d.bottom()))
            print("from left: {}".format( ( (d.left() + d.right()) / 2 ) / len(image[0]) ))
            print("from top: {}".format( ( (d.top() + d.bottom()) / 2 ) /len(image)) )
        print( "process: " + str( time() - start ) )

        start = time()
        win.clear_overlay()
        win.set_image(image)
        win.add_overlay(dets)

        print( "show: " + str( time() - start ) )
        #dlib.hit_enter_to_continue()



for f in sys.argv[1:]:
    print("Processing file: {}".format(f))
    img = io.imread(f)
    # The 1 in the second argument indicates that we should upsample the image
    # 1 time.  This will make everything bigger and allow us to detect more
    # faces.
    dets = detector(img, 1)
    print("Number of faces detected: {}".format(len(dets)))
    for i, d in enumerate(dets):
        print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format(
            i, d.left(), d.top(), d.right(), d.bottom()))

    win.clear_overlay()
    win.set_image(img)
    win.add_overlay(dets)
    dlib.hit_enter_to_continue()


# Finally, if you really want to you can ask the detector to tell you the score
# for each detection.  The score is bigger for more confident detections.
# Also, the idx tells you which of the face sub-detectors matched.  This can be
# used to broadly identify faces in different orientations.
if (len(sys.argv[1:]) > 0):
    img = io.imread(sys.argv[1])
    dets, scores, idx = detector.run(img, 1)
    for i, d in enumerate(dets):
        print("Detection {}, score: {}, face_type:{}".format(
            d, scores[i], idx[i]))
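As an aside, the per-pixel swap in the loop above can be done in one vectorized step. A minimal sketch with NumPy (assuming the usual H×W×3 uint8 frame; `cv2.cvtColor(image, cv2.COLOR_BGR2RGB)` would do the same job):

```python
import numpy as np

frame = np.arange(24, dtype=np.uint8).reshape(2, 4, 3)  # toy BGR "frame"

# reverse the channel axis (BGR -> RGB); ascontiguousarray makes the
# result C-contiguous, which dlib expects for its input images
rgb = np.ascontiguousarray(frame[:, :, ::-1])

print(rgb[0, 0].tolist())  # first pixel's channels, reversed: [2, 1, 0]
```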

Judging by the timing output of this program, grabbing and processing each take about a fifth of a second, so you would think it should show one or two updates per second. However, if you raise your hand, it doesn't show up in the webcam view until about 5 seconds later!

Is there some kind of internal buffering that keeps it from grabbing the latest webcam image? Can I adjust or multithread the webcam input process to fix the lag? This is on an Intel i5 with 16 GB of RAM.

Update

According to this, read grabs the video frame by frame. That would explain why it grabs the next frame, and the next, until it finally catches up with all the frames that were buffered while it was processing. Is there an option to set the frame rate, or to make it drop frames, so that each read just grabs the current webcam picture of my face? http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_video_display/py_video_display.html#capture-video-from-camera
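Whether such an option exists depends on the capture backend, but a common workaround is to read and discard a few frames before using one, so the buffered backlog is drained. A sketch of that idea (the `FakeCam` class and the buffer depth of 4 are assumptions for illustration; with a real camera you would pass a `cv2.VideoCapture` instead):

```python
def latest_frame(cam, drain=4):
    """Read and discard a few buffered frames, then return the newest one.

    `drain` is a guess at the internal buffer depth; OpenCV does not
    expose it portably, so this is a heuristic sketch.
    """
    frame = None
    for _ in range(drain + 1):
        ok, new = cam.read()
        if not ok:
            break  # source exhausted; keep the last good frame
        frame = new
    return frame

# stand-in for cv2.VideoCapture, so the helper can run without a camera
class FakeCam:
    def __init__(self, frames):
        self.frames = list(frames)
    def read(self):
        if self.frames:
            return True, self.frames.pop(0)
        return False, None

cam = FakeCam([1, 2, 3, 4, 5, 6])
print(latest_frame(cam))  # drains 1..4, returns 5
```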

4 Answers:

Answer 0 (score: 2)

I feel your pain. I actually worked with that webcam script recently (many iterations; heavily edited), and I think I got it working very well. So you can see what I did, I created a GitHub Gist with the details (code; HTML readme; sample output):

https://gist.github.com/victoriastuart/8092a3dd7e97ab57ede7614251bf5cbd

Answer 1 (score: 1)

The problem may be that a threshold is set. As described here,

dots = detector(frame, 1)

should be changed to

dots = detector(frame)

to avoid hitting the threshold. This worked for me, but at the same time there is the problem that frames are processed too quickly.
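(If I recall correctly, dlib's `detector.run` also accepts an explicit `adjust_threshold` argument; lowering it keeps weaker detections.) The filtering effect of a threshold can be illustrated standalone, with made-up detections and scores:

```python
# hypothetical raw detections: (box_id, confidence score) - not real dlib output
raw = [("a", 1.8), ("b", 0.4), ("c", -0.2)]

def keep(dets, threshold=0.0):
    # mimic a detector threshold: only detections scoring above it survive
    return [d for d, s in dets if s > threshold]

print(keep(raw))        # default threshold drops "c"
print(keep(raw, -0.5))  # lowered threshold keeps all three
```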

Answer 2 (score: 0)

If you want to display the frames you read, OpenCV can do that with the cv2.imshow() function, with no need to change the color order. On the other hand, if you still want to display the picture with matplotlib, you can't avoid a conversion along these lines:

b, g, r = cv2.split(img)
img = cv2.merge((r, g, b))  # merge back in reversed order; (b, g, r) would change nothing

That's the only thing I can help you with for now =)
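The split/merge swap is equivalent to simply reversing the channel axis; a quick NumPy-only check (the indexing below mimics `cv2.split` and `cv2.merge`, so OpenCV itself isn't needed to verify the idea):

```python
import numpy as np

img = np.arange(12, dtype=np.uint8).reshape(2, 2, 3)  # toy BGR image
b, g, r = img[..., 0], img[..., 1], img[..., 2]       # like cv2.split(img)
merged = np.dstack((r, g, b))                          # like cv2.merge((r, g, b))

print(np.array_equal(merged, img[:, :, ::-1]))  # True
```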

Answer 3 (score: 0)

I tried multithreading and it was just as slow. Then I multithreaded with only .read() in the thread, no processing and no thread locking, and it worked very fast: maybe 1 second or so of lag, not 3 or 5. See http://www.pyimagesearch.com/2015/12/21/increasing-webcam-fps-with-python-and-opencv/

from __future__ import division
import sys
from time import time, sleep
import threading

import dlib
from skimage import io


detector = dlib.get_frontal_face_detector()
win = dlib.image_window()

class webCamGrabber( threading.Thread ):
    def __init__( self ):
        threading.Thread.__init__( self )
        #Lock for when you can read/write self.image:
        #self.imageLock = threading.Lock()
        self.image = False

        from cv2 import VideoCapture, cv
        from time import time

        self.cam = VideoCapture(0)  #set the port of the camera as before
        #self.cam.set(cv.CV_CAP_PROP_FPS, 1)


    def run( self ):
        while True:
            start = time()
            #self.imageLock.acquire()
            retval, self.image = self.cam.read() #returns a True boolean and the image if all goes right

            print( type( self.image) )
            #import matplotlib.pyplot as plt
            #plt.imshow(image)
            #plt.show()

            #print( "readimage: " + str( time() - start ) )
            #sleep(0.1)

if len( sys.argv[1:] ) == 0:

    #Start webcam reader thread:
    camThread = webCamGrabber()
    camThread.start()

    #Setup window for results
    detector = dlib.get_frontal_face_detector()
    win = dlib.image_window()

    while True:
        #camThread.imageLock.acquire()
        if camThread.image is not False:
            print( "enter")
            start = time()

            myimage = camThread.image
            for row in myimage:
                for px in row:
                    #rgb expected... but the array is bgr?
                    r = px[2]
                    px[2] = px[0]
                    px[0] = r


            dets = detector( myimage, 0)
            #camThread.imageLock.release()
            print("your faces: {}".format(len(dets)))
            for i, d in enumerate( dets ):
                print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format(
                    i, d.left(), d.top(), d.right(), d.bottom()))
                print("from left: {}".format( ( (d.left() + d.right()) / 2 ) / len(camThread.image[0]) ))
                print("from top: {}".format( ( (d.top() + d.bottom()) / 2 ) /len(camThread.image)) )
            print( "process: " + str( time() - start ) )

            start = time()
            win.clear_overlay()
            win.set_image(myimage)
            win.add_overlay(dets)

            print( "show: " + str( time() - start ) )
            #dlib.hit_enter_to_continue()



for f in sys.argv[1:]:
    print("Processing file: {}".format(f))
    img = io.imread(f)
    # The 1 in the second argument indicates that we should upsample the image
    # 1 time.  This will make everything bigger and allow us to detect more
    # faces.
    dets = detector(img, 1)
    print("Number of faces detected: {}".format(len(dets)))
    for i, d in enumerate(dets):
        print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format(
            i, d.left(), d.top(), d.right(), d.bottom()))

    win.clear_overlay()
    win.set_image(img)
    win.add_overlay(dets)
    dlib.hit_enter_to_continue()


# Finally, if you really want to you can ask the detector to tell you the score
# for each detection.  The score is bigger for more confident detections.
# Also, the idx tells you which of the face sub-detectors matched.  This can be
# used to broadly identify faces in different orientations.
if (len(sys.argv[1:]) > 0):
    img = io.imread(sys.argv[1])
    dets, scores, idx = detector.run(img, 1)
    for i, d in enumerate(dets):
        print("Detection {}, score: {}, face_type:{}".format(
            d, scores[i], idx[i]))
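The pattern above, a thread that does nothing but overwrite a latest-frame slot, can be reduced to a small camera-free sketch (`CountingSource` is a stand-in for `cv2.VideoCapture` so this runs headless; with a real camera, `source.read()` would block on the hardware instead):

```python
import threading
import time

class LatestFrameGrabber(threading.Thread):
    """Keep only the newest frame from a blocking source.

    Anything with a read() returning (ok, frame) works as `source`,
    so the pattern can be exercised without a webcam.
    """
    def __init__(self, source):
        super().__init__(daemon=True)
        self.source = source
        self.frame = None
        self.stopped = False

    def run(self):
        while not self.stopped:
            ok, frame = self.source.read()
            if ok:
                self.frame = frame  # overwrite: consumers always see the latest

class CountingSource:
    """Fake camera that returns an increasing frame counter."""
    def __init__(self):
        self.n = 0
    def read(self):
        self.n += 1
        return True, self.n

grabber = LatestFrameGrabber(CountingSource())
grabber.start()
time.sleep(0.05)          # let the grabber loop a few times
grabber.stopped = True
grabber.join()
print(grabber.frame is not None and grabber.frame >= 1)  # True
```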