How to convert points in depth space to color space in Kinect without using Kinect SDK functions?

Time: 2014-01-23 17:16:31

Tags: c++ opengl 3d kinect camera-calibration

I am working on an augmented reality application that overlays 3D objects on top of color video of the user. It uses Kinect version 1.7, and the rendering of the virtual objects is done in OpenGL. I have managed to overlay 3D objects on the depth video successfully, by using the depth camera's intrinsic constants from the NuiSensor.h header and computing a projection matrix from the formulas I found at http://ksimek.github.io/2013/06/03/calibrated_cameras_in_opengl/. The 3D objects rendered with this projection matrix overlap exactly with the 2D skeleton points in depth space. This is not surprising, since the skeleton 3D points are computed from depth space, and it gives me confidence that a projection matrix computed outside the Kinect SDK can work.

Here is the code that computes the projection matrix from the intrinsic constants, and how it is used:

#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp> //for glm::value_ptr used below

glm::mat4 GetOpenGLProjectionMatrixFromCameraIntrinsics(float alpha, float beta, float skew, float u0, float v0, 
    int img_width, int img_height, float near_clip, float far_clip ) 
{
    float L = 0;
    float R = (float)img_width;
    float B = 0;
    float T = (float)img_height;
    float N = near_clip;
    float F = far_clip;

    glm::mat4 ortho = glm::mat4(0);
    glm::mat4  proj = glm::mat4(0);

    //The entries below are written as m[row][col] (a row-major reading);
    //glm::mat4 is actually column-major, so the result is transposed at the end
    ortho[0][0] =  2.0f/(R-L); 
    ortho[0][3] = -(R+L)/(R-L);     
    ortho[1][1] =  2.0f/(T-B); 
    ortho[1][3] = -(T+B)/(T-B);     
    ortho[2][2] = -2.0f/(F-N); 
    ortho[2][3] = -(F+N)/(F-N); 
    ortho[3][3] = 1; 

    proj[0][0] = alpha;  proj[0][1] = skew;  proj[0][2] = -u0;
    proj[1][1] = beta;   proj[1][2] = -v0;
    proj[2][2] = N+F;    proj[2][3] = N*F;
    proj[3][2] = -1;

    //Because the entries were written in row-major reading order into
    //column-major glm storage, 'ortho' and 'proj' actually hold the transposes
    //of the intended matrices. Multiplying them in reversed order and
    //transposing the product therefore yields the intended ortho*proj in the
    //column-major layout that OpenGL expects.
    return glm::transpose(proj*ortho);   
}

//Compute the projection matrix of the Kinect depth camera
m_3DProjectionMatrix = GetOpenGLProjectionMatrixFromCameraIntrinsics(m_fx, m_fy, m_skew, m_PPx0, m_PPy0, WIN_WIDTH, WIN_HEIGHT, 0.01f, 10);

//where m_fx, m_fy, m_skew, m_PPx0, m_PPy0, WIN_WIDTH, WIN_HEIGHT are 1142.52, 1142.52, 0.00, 640.00, 480.00, 1280, 960 respectively. These numbers come from NuiImageCamera.h for the depth camera.
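
For reference, these intrinsics can be reproduced from the SDK's nominal focal length constant. A minimal sketch, assuming NUI_CAMERA_DEPTH_NOMINAL_FOCAL_LENGTH_IN_PIXELS from NuiImageCamera.h (285.63, defined for 320x240 depth frames) and a principal point assumed at the image center:

#include <NuiApi.h> //pulls in NuiImageCamera.h (Kinect SDK 1.x)

//Scale the 320x240 nominal focal length up to the 1280x960 render target
const float scale = (float)WIN_WIDTH / 320.0f;                           //4.0 for 1280
float m_fx   = NUI_CAMERA_DEPTH_NOMINAL_FOCAL_LENGTH_IN_PIXELS * scale;  //285.63 * 4 = 1142.52
float m_fy   = m_fx;                                                     //square pixels assumed
float m_skew = 0.0f;
float m_PPx0 = WIN_WIDTH  / 2.0f;                                        //640, image center assumed
float m_PPy0 = WIN_HEIGHT / 2.0f;                                        //480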

Here is how the 2D points are drawn:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();   
glOrtho(0, WIN_WIDTH, WIN_HEIGHT, 0, 0.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

Draw2DSkeletonRGBPoints();//Uses NuiTransformSkeletonToDepthImage() followed by NuiImageGetColorPixelCoordinatesFromDepthPixel()
Draw2DSkeletonDepthPoints();//Uses NuiTransformSkeletonToDepthImage() only
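
For context, a minimal sketch of what these two helpers could look like with the SDK 1.x mapping calls (illustrative, not the original implementation; it assumes a tracked NUI_SKELETON_DATA and 640x480 depth/color resolutions, so the resulting coordinates may still need scaling to the 1280x960 window):

//Illustrative sketch only
void Draw2DSkeletonDepthPoints(const NUI_SKELETON_DATA& skel)
{
    glBegin(GL_POINTS);
    for (int j = 0; j < NUI_SKELETON_POSITION_COUNT; ++j)
    {
        FLOAT dx, dy;
        //Skeleton space (meters) -> depth image pixel coordinates
        NuiTransformSkeletonToDepthImage(skel.SkeletonPositions[j], &dx, &dy,
                                         NUI_IMAGE_RESOLUTION_640x480);
        glVertex2f(dx, dy);
    }
    glEnd();
}

void Draw2DSkeletonRGBPoints(const NUI_SKELETON_DATA& skel)
{
    glBegin(GL_POINTS);
    for (int j = 0; j < NUI_SKELETON_POSITION_COUNT; ++j)
    {
        LONG dx, dy, cx, cy;
        USHORT depth;
        //Skeleton space -> depth pixel, keeping the depth value for the next call
        NuiTransformSkeletonToDepthImage(skel.SkeletonPositions[j], &dx, &dy, &depth,
                                         NUI_IMAGE_RESOLUTION_640x480);
        //Depth pixel -> color pixel via the SDK's internal calibration
        NuiImageGetColorPixelCoordinatesFromDepthPixel(NUI_IMAGE_RESOLUTION_640x480,
                                                       NULL, dx, dy, depth, &cx, &cy);
        glVertex2f((float)cx, (float)cy);
    }
    glEnd();
}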

Then the 3D points are drawn:

glMatrixMode(GL_PROJECTION);
glLoadMatrixf(glm::value_ptr(m_3DProjectionMatrix));
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

Draw3DSkeletonPoints();//The Skeleton 3D coordinates from Kinect
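
For completeness, a minimal sketch of Draw3DSkeletonPoints() (again illustrative; it assumes the joint positions come from a tracked NUI_SKELETON_DATA, and that Z is negated because Kinect's skeleton space has +Z toward the scene while the projection above expects an OpenGL camera looking down -Z):

//Illustrative sketch only
void Draw3DSkeletonPoints(const NUI_SKELETON_DATA& skel)
{
    glPointSize(6.0f);
    glBegin(GL_POINTS);
    for (int j = 0; j < NUI_SKELETON_POSITION_COUNT; ++j)
    {
        const Vector4& p = skel.SkeletonPositions[j]; //meters, skeleton space
        glVertex3f(p.x, p.y, -p.z); //negate Z for the -Z-looking OpenGL camera
    }
    glEnd();
}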

However, overlaying virtual objects on top of the color video is not as straightforward. There seems to be some translation, scaling, and even a slight rotation between the color and depth spaces. I know there is an SDK function that converts a skeleton point to a color point, but that cannot easily be used for OpenGL rendering; what I need is a transformation matrix that maps 3D skeleton points from the skeleton coordinate space to 3D points with the color camera as the origin. Does anyone know how to compute this transformation matrix? Where can I find more information about doing this?
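
To make the shape of the missing piece concrete: if the extrinsics were known, I would expect the color overlay to mirror the depth overlay above, with the color camera's intrinsics in the projection and the rigid transform in the modelview. Everything below is a placeholder (color_fx, color_fy, color_ppx, color_ppy and m_DepthToColor are exactly the unknowns I am asking about):

//Hypothetical usage, assuming the sought 4x4 [R|t] were available
glm::mat4 m_ColorProjectionMatrix = GetOpenGLProjectionMatrixFromCameraIntrinsics(
    color_fx, color_fy, 0.0f, color_ppx, color_ppy, //color intrinsics: unknown
    WIN_WIDTH, WIN_HEIGHT, 0.01f, 10);

glMatrixMode(GL_PROJECTION);
glLoadMatrixf(glm::value_ptr(m_ColorProjectionMatrix));
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(glm::value_ptr(m_DepthToColor)); //skeleton space -> color camera space

Draw3DSkeletonPoints(); //would then land on the color video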

0 Answers:

There are no answers yet.