Simple blob detection using the Kinect SDK

Asked: 2012-09-21 11:18:30

Tags: c++ visual-c++ kinect

As far as I understand, the official Kinect 1.5 SDK ships with face tracking and skeleton tracking. What about simple blob detection? All I want to do is track a circular/elliptical object. I couldn't find anything for this in the SDK, so should I use OpenCV or some other library?

(My code is written in C++.)

EDIT1: Is it possible to adapt the face tracker so that it detects circles in general (rather than faces)?

EDIT2: Here is the depth processing code from the sample that ships with the SDK. How can I get OpenCV to extract blobs from it?

void CDepthBasics::ProcessDepth()
{
    HRESULT hr;
    NUI_IMAGE_FRAME imageFrame;

    // Attempt to get the depth frame
    hr = m_pNuiSensor->NuiImageStreamGetNextFrame(m_pDepthStreamHandle, 0, &imageFrame);
    if (FAILED(hr))
    {
        return;
    }

    INuiFrameTexture * pTexture = imageFrame.pFrameTexture;
    NUI_LOCKED_RECT LockedRect;

    // Lock the frame data so the Kinect knows not to modify it while we're reading it
    pTexture->LockRect(0, &LockedRect, NULL, 0);

    // Make sure we've received valid data
    if (LockedRect.Pitch != 0)
    {
        BYTE * rgbrun = m_depthRGBX;
        const USHORT * pBufferRun = (const USHORT *)LockedRect.pBits;

        // end pixel is start + width*height - 1
        const USHORT * pBufferEnd = pBufferRun + (cDepthWidth * cDepthHeight);

        while ( pBufferRun < pBufferEnd )
        {
            // discard the portion of the depth that contains only the player index
            USHORT depth = NuiDepthPixelToDepth(*pBufferRun);

            // to convert to a byte we're looking at only the lower 8 bits
            // by discarding the most significant rather than least significant data
            // we're preserving detail, although the intensity will "wrap"
            BYTE intensity = static_cast<BYTE>(depth % 256);

            // Write out blue byte
            *(rgbrun++) = intensity;

            // Write out green byte
            *(rgbrun++) = intensity;

            // Write out red byte
            *(rgbrun++) = intensity;

            // We're outputting BGR, the last byte in the 32 bits is unused so skip it
            // If we were outputting BGRA, we would write alpha here.
            ++rgbrun;

            // Increment our index into the Kinect's depth buffer
            ++pBufferRun;

        }

        // Draw the data with Direct2D
        m_pDrawDepth->Draw(m_depthRGBX, cDepthWidth * cDepthHeight * cBytesPerPixel);
    }

    // We're done with the texture so unlock it
    pTexture->UnlockRect(0);

    // Release the frame
    m_pNuiSensor->NuiImageStreamReleaseFrame(m_pDepthStreamHandle, &imageFrame);
}
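
One way to hand this off to OpenCV (not part of the SDK sample, just a rough sketch): wrap the buffer that ProcessDepth() already fills in a cv::Mat. The helper below is hypothetical -- DepthBufferToGray is a name introduced here for illustration -- and assumes OpenCV 2.x is linked; the cv::Mat constructor does not copy the pixels, it only points at the existing buffer.

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Hypothetical helper: wrap the BGRX intensity buffer produced by
// ProcessDepth() and reduce it to a single-channel 8-bit image that
// OpenCV's blob/circle routines can operate on.
cv::Mat DepthBufferToGray(const BYTE* depthRGBX, int width, int height)
{
    // No copy here -- the Mat header just points at the Kinect buffer.
    cv::Mat bgrx(height, width, CV_8UC4, const_cast<BYTE*>(depthRGBX));

    cv::Mat gray;
    cv::cvtColor(bgrx, gray, cv::COLOR_BGRA2GRAY);
    return gray; // cvtColor allocated fresh storage, so this is safe to return
}

It could be called at the end of ProcessDepth(), e.g. cv::Mat gray = DepthBufferToGray(m_depthRGBX, cDepthWidth, cDepthHeight);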

2 Answers:

Answer 0 (score: 1)

You could use this, since OpenCV does not natively support working with blobs:

http://opencv.willowgarage.com/wiki/cvBlobsLib

Answer 1 (score: 1)

Once you have the image from the Kinect, you are free to use any image processing library you like.

You can use OpenCV's Hough Circle Transform to detect circles. You may need to convert the Kinect image format to cv::Mat first.
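
As a rough sketch of what that could look like (not from the original answer), assuming the depth frame has already been converted to a single-channel 8-bit cv::Mat named gray, for instance via the DepthBufferToGray sketch above; all numeric parameters are placeholders that would need tuning for the actual scene:

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

// Smooth first so noise in the depth image does not produce false circles.
cv::GaussianBlur(gray, gray, cv::Size(9, 9), 2, 2);

std::vector<cv::Vec3f> circles;
cv::HoughCircles(gray, circles,
                 CV_HOUGH_GRADIENT,   // gradient-based Hough, the method OpenCV implements
                 1,                   // accumulator resolution (same as the input image)
                 gray.rows / 8,       // minimum distance between detected centres
                 100, 30,             // Canny high threshold, accumulator threshold
                 10, 100);            // min / max radius in pixels

for (size_t i = 0; i < circles.size(); ++i)
{
    cv::Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
    int radius = cvRound(circles[i][2]);
    // center and radius describe one detected circular blob
}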

I don't think OpenCV is the only library with that functionality, though. If you are interested, look up Hough transforms.

I don't think adapting the face tracker is the way to go.