Reading from a Kinect camera instead of the default webcam to detect objects in the frame

Date: 2019-06-26 20:58:39

Tags: python opencv kinect webcam video-capture

I found two example codes that each perform a separate task I want to use together.

The first code opens the webcam on my laptop and reads the video stream to detect a specifically colored object in the frame. It then draws a contour around the circular object and leaves a colored trail of its previous positions as it moves in real time.

The only problem is that I am trying to use an Xbox 360 Kinect as the webcam instead of the built-in webcam on my laptop. (In the future I also plan to use the depth camera, which is why I want to use the Kinect.)

The second code shows how to open and view the Kinect camera's video stream.

I found that setting the number in VideoStream(src=0).start() to 0 selects the default camera. If I change that value to 1, 2, 3, or so on, it should read from the next available camera. However, when I print all of the available cameras, only the webcam is listed.
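
As a quick sanity check (this probe loop is my own addition, not from either sample), the snippet below tries to open the first few device indices with OpenCV's cv2.VideoCapture and reports which ones respond, which shows whether the Kinect is being enumerated as an ordinary capture device at all:

#---------- Camera index probe (illustrative) ----------

import cv2

# Try the first few device indices and report which ones OpenCV can open.
for index in range(5):
    cap = cv2.VideoCapture(index)
    if cap.isOpened():
        grabbed, _ = cap.read()
        print("index %d: opened, frame grabbed = %s" % (index, grabbed))
    else:
        print("index %d: could not be opened" % index)
    cap.release()

#---------- End of camera index probe ----------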

I have removed and reinstalled all of the correct drivers and packages, hoping it would work if I just put a 1 in that line of code, but had no luck. There must be another way to go about this.

#------------First Code--------------------------------------------

# import the necessary packages
from collections import deque
from imutils.video import VideoStream
import numpy as np
import argparse
import cv2
import imutils
import time

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video",
    help="path to the (optional) video file")
ap.add_argument("-b", "--buffer", type=int, default=64,
    help="max buffer size")
args = vars(ap.parse_args())

# define the lower and upper boundaries of the "green"
# ball in the HSV color space, then initialize the
# list of tracked points
greenLower = (53, 36, 124)
greenUpper = (200, 200, 242)
pts = deque(maxlen=args["buffer"])

# if a video path was not supplied, grab the reference
# to the webcam
if not args.get("video", False): # if no video file was given
    vs = VideoStream(src=0).start() #access the webcam here

# otherwise, grab a reference to the video file
else:
    vs = cv2.VideoCapture(args["video"])

# allow the camera or video file to warm up
time.sleep(2.0)

# keep looping
while True:
    # grab the current frame
    frame = vs.read()

    # handle the frame from VideoCapture or VideoStream
    frame = frame[1] if args.get("video", False) else frame

    # if we are viewing a video and we did not grab a frame,
    # then we have reached the end of the video
    if frame is None:
        break

    # resize the frame, blur it, and convert it to the HSV
    # color space
    frame = imutils.resize(frame, width=600)
    blurred = cv2.GaussianBlur(frame, (11, 11), 0)
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)

    # construct a mask for the color "green", then perform
    # a series of dilations and erosions to remove any small
    # blobs left in the mask
    mask = cv2.inRange(hsv, greenLower, greenUpper)
    mask = cv2.erode(mask, None, iterations=2)
    mask = cv2.dilate(mask, None, iterations=2)

    # find contours in the mask and initialize the current
    # (x, y) center of the ball
    cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnts = imutils.grab_contours(cnts)
    center = None

    # only proceed if at least one contour was found
    if len(cnts) > 0:
        # find the largest contour in the mask, then use
        # it to compute the minimum enclosing circle and
        # centroid
        c = max(cnts, key=cv2.contourArea)
        ((x, y), radius) = cv2.minEnclosingCircle(c)
        M = cv2.moments(c)
        center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))

        # only proceed if the radius meets a minimum size
        if radius > 10:
            # draw the circle and centroid on the frame,
            # then update the list of tracked points
            cv2.circle(frame, (int(x), int(y)), int(radius),
                (0, 255, 255), 2)
            cv2.circle(frame, center, 5, (0, 0, 255), -1)

    # update the points queue
    pts.appendleft(center)
    # loop over the set of tracked points
    for i in range(1, len(pts)):
        # if either of the tracked points are None, ignore
        # them
        if pts[i - 1] is None or pts[i] is None:
            continue

        # otherwise, compute the thickness of the line and
        # draw the connecting lines
        thickness = int(np.sqrt(args["buffer"] / float(i + 1)) * 2.5)
        cv2.line(frame, pts[i - 1], pts[i], (0, 0, 255), thickness)

    # show the frame to our screen
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF

    # if the 'q' key is pressed, stop the loop
    if key == ord("q"):
        cv2.destroyAllWindows()
        break


# close all windows
cv2.destroyAllWindows()

#----------End of the First Code-----------------------------------



#------------Second Code-------------------------------------------

from pykinect import nui
import numpy
import cv2

def video_handler_function(frame):

    video = numpy.empty((480,640,4),numpy.uint8)
    frame.image.copy_bits(video.ctypes.data)

    cv2.imshow('KINECT Video Stream', video)


kinect = nui.Runtime()
kinect.video_frame_ready += video_handler_function
kinect.video_stream.open(nui.ImageStreamType.Video, 2,
                         nui.ImageResolution.Resolution640x480,
                         nui.ImageType.Color)

cv2.namedWindow('KINECT Video Stream', cv2.WINDOW_AUTOSIZE)

while True:

    key = cv2.waitKey(1)
    if key == 27: break

kinect.close()
cv2.destroyAllWindows()

#----------End of the Second Code----------------------------------


When I change the value to 1, which is the port the Kinect is connected to, it should open the video stream and give the same results as the first code, but instead the Python app just closes.
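
In case it helps clarify what I am after, below is a rough sketch (my own, untested) of how the Kinect frames from the second code might be handed to the tracking loop from the first code, assuming pykinect keeps delivering 640x480 BGRA color frames as in the sample: the frame-ready handler stores the most recent image in a shared variable, and the main loop converts it to BGR and runs the same masking steps on it.

#---------- Sketch: Kinect frames into the tracking loop (illustrative, untested) ----------

from pykinect import nui
import numpy
import cv2

latest_frame = None  # most recent BGRA frame delivered by the Kinect

def video_handler_function(frame):
    # copy the Kinect color frame into a numpy buffer and publish it
    global latest_frame
    video = numpy.empty((480, 640, 4), numpy.uint8)
    frame.image.copy_bits(video.ctypes.data)
    latest_frame = video

kinect = nui.Runtime()
kinect.video_frame_ready += video_handler_function
kinect.video_stream.open(nui.ImageStreamType.Video, 2,
                         nui.ImageResolution.Resolution640x480,
                         nui.ImageType.Color)

greenLower = (53, 36, 124)
greenUpper = (200, 200, 242)

while True:
    if latest_frame is None:
        # no frame has arrived yet; keep the window responsive
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
        continue

    # the Kinect delivers BGRA, so drop the alpha channel before the usual steps
    frame = cv2.cvtColor(latest_frame, cv2.COLOR_BGRA2BGR)
    blurred = cv2.GaussianBlur(frame, (11, 11), 0)
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, greenLower, greenUpper)
    mask = cv2.erode(mask, None, iterations=2)
    mask = cv2.dilate(mask, None, iterations=2)
    # ... the same contour / centroid / trail logic as in the first code ...

    cv2.imshow("Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

kinect.close()
cv2.destroyAllWindows()

#---------- End of sketch ----------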

0 Answers:

No answers