Depth to world coordinate conversion using OpenNI and Kinect

Asked: 2016-07-05 20:42:13

Tags: c++ kinect openni

I have a question about how to interpret the results of the depth-to-real-world coordinate conversion given by the Microsoft Kinect sensor and OpenNI. I am using Visual Studio 2015 (C++), OpenNI 2.0, and OpenCV 2.4.

The library works fine: I can grab the depth map and then call openni::CoordinateConverter::convertDepthToWorld on a specific pixel to get its x, y, z world coordinates.
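For reference, the core call boils down to a small helper like the one below (a minimal sketch, not my full program; pixelToWorld is a hypothetical helper name, and it assumes an initialized OpenNI context, a started depth VideoStream, and a valid depth frame):

// Convert the depth pixel at (u, v) to world coordinates in millimetres.
// pixelToWorld is a hypothetical helper; assumes depthStream is started
// and frame holds a valid depth frame.
bool pixelToWorld(const openni::VideoStream& depthStream,
                  const openni::VideoFrameRef& frame,
                  int u, int v, float& wx, float& wy, float& wz)
{
    const openni::DepthPixel* data =
        (const openni::DepthPixel*)frame.getData();
    openni::DepthPixel d = data[v * frame.getWidth() + u];
    if (d == 0) return false; // no depth reading at this pixel
    return openni::CoordinateConverter::convertDepthToWorld(
               depthStream, u, v, d, &wx, &wy, &wz) == openni::STATUS_OK;
}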

What confuses me is that when I pick two pixels and get the x distance (horizontal distance) between them, the value differs depending on the depth of those pixels. For example, I point the Kinect sensor at a solid wall and mark two reference points on it. Then I go to the Kinect's depth view, locate those two points (pixel positions), and extract their x, y, z coordinates. However, if the Kinect sensor is 700 mm from the wall, the x distance between these two points differs from the value I get when I move the Kinect 1000 mm away from the wall.

So the question is: is it correct that the distance between two fixed points in the world varies with depth? I expected the distance between them to stay the same as I move the sensor toward or away from the wall.
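For scale: if the two pixel coordinates are held fixed (as in my code below), the standard pinhole model predicts that their world-x separation grows linearly with depth. A quick back-of-envelope sketch (the focal length fx ~ 585 px and principal point cx = 320 are typical Kinect VGA figures I am assuming, not calibrated values):

#include <cstdio>

// Pinhole model: world X (mm) of pixel column u at depth Z (mm).
// fx ~ 585 px and cx = 320 are assumed typical Kinect VGA intrinsics.
float worldX(int u, float Z, float fx = 585.0f, float cx = 320.0f)
{
    return (u - cx) * Z / fx;
}

int main()
{
    // Separation of pixel columns 200 and 320 at the two wall distances I tested
    const float depths[] = {700.0f, 1000.0f};
    for (float Z : depths)
        std::printf("Z = %4.0f mm -> dx = %.1f mm\n",
                    Z, worldX(320, Z) - worldX(200, Z));
    return 0;
}

This prints roughly 143 mm at 700 mm and 205 mm at 1000 mm, which is the kind of variation I am seeing, so I want to confirm whether that behaviour is expected for fixed pixels as opposed to fixed physical points.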

Thanks.

#include <iostream>
#include <sstream>
#include <stdexcept>
#include <cstring>
#include <cstdint>

#include <OpenNI.h>
#include <opencv2/opencv.hpp>

int main()
{
try {
    openni::OpenNI::initialize();

    openni::Device device;
    auto ret = device.open(openni::ANY_DEVICE);
    if (ret != openni::STATUS_OK) {
        throw std::runtime_error("Failed to open device");
    }

    openni::VideoStream depthStream;
    depthStream.create(device, openni::SensorType::SENSOR_DEPTH);
    depthStream.start();



    openni::VideoStream colorStream;
    colorStream.create(device, openni::SensorType::SENSOR_COLOR);
    colorStream.start();


    device.setDepthColorSyncEnabled(true);
    device.setImageRegistrationMode(openni::IMAGE_REGISTRATION_DEPTH_TO_COLOR);

    cv::Mat depthImage;
    cv::Mat colorImage;
    cv::Mat dst_image;

    cv::Mat ColorFrame;



    while (1) {

        // Grab Frames
        openni::VideoFrameRef depthFrame;
        depthStream.readFrame(&depthFrame);

        openni::VideoFrameRef initialcolorFrame;
        colorStream.readFrame(&initialcolorFrame);

        // Build an RGB cv::Mat from the color stream frame
        const openni::RGB888Pixel* imageBuffer = (const openni::RGB888Pixel*)initialcolorFrame.getData();

        ColorFrame.create(initialcolorFrame.getHeight(), initialcolorFrame.getWidth(), CV_8UC3);
        memcpy(ColorFrame.data, imageBuffer, 3 * initialcolorFrame.getHeight() * initialcolorFrame.getWidth() * sizeof(uint8_t));

        cv::cvtColor(ColorFrame, ColorFrame, CV_BGR2RGB); // swap channels: OpenNI delivers RGB, cv::imshow expects BGR



        if (depthFrame.isValid()) {
            // Wrap the 16-bit depth buffer in a cv::Mat (no copy)
            depthImage = cv::Mat(depthStream.getVideoMode().getResolutionY(),
                depthStream.getVideoMode().getResolutionX(),
                CV_16U, (void*)depthFrame.getData());

            // Scale to 8 bits for display (assumes depths up to ~10 m)
            depthImage.convertTo(dst_image, CV_8U, 255.0 / 10000.0);
        }




        if (depthImage.empty()) {
            continue; // no valid depth frame yet
        }

        // Get world coordinates of two points.
        float x = 0;
        float y = 0;
        float z = 0;

        float xx = 0;
        float yy = 0;
        float zz = 0;

        // Point 1 (camera image center). Note that cv::Mat::at takes
        // (row, col), i.e. (y, x), while convertDepthToWorld takes (x, y).
        openni::CoordinateConverter::convertDepthToWorld(depthStream, 320, 240,
            depthImage.at<openni::DepthPixel>(240, 320), &x, &y, &z);


        // Point 2
        openni::CoordinateConverter::convertDepthToWorld(depthStream, 200, 220,
            depthImage.at<openni::DepthPixel>(220, 200), &xx, &yy, &zz);


        // Draw the two points on the color image.

        cv::circle(ColorFrame, cv::Point(320, 240), 5, cv::Scalar(255, 0, 0), -1, CV_AA);
        cv::circle(ColorFrame, cv::Point(200, 220), 5, cv::Scalar(0, 255, 0), -1, CV_AA);



        // Compute flat-buffer indices of the two points in the depth frame
        auto videoMode = depthStream.getVideoMode();

        int point1X = 320;
        int point1Y = 240;
        int point2X = 200;
        int point2Y = 220;
        int point1Index = (point1Y * videoMode.getResolutionX()) + point1X;
        int point2Index = (point2Y * videoMode.getResolutionX()) + point2X;



        const openni::DepthPixel* depth = (const openni::DepthPixel*)depthFrame.getData();


        // Overlay the x, y, z world coordinates on the image
        std::stringstream ss;
        ss << "Center: (320, 240)";
        cv::putText(ColorFrame,
            ss.str(),
            cv::Point(0, 50),
            cv::FONT_HERSHEY_SIMPLEX,
            1,
            cv::Scalar(255, 0, 0),
            1);

        std::stringstream ss2;
        ss2 << "x[mm] :" << x << " y[mm] :" << y << " z[mm] :" << depth[centerIndexX];

        cv::putText(ColorFrame,
            ss2.str(),
            cv::Point(0, 100),
            cv::FONT_HERSHEY_SIMPLEX,
            0.8,
            cv::Scalar(0, 0, 255),
            1);




        std::stringstream ss3;
        ss3 << "Green Point: 200, 220 :" << std::endl;
        cv::putText(ColorFrame,
            ss3.str(),
            cv::Point(0, 150),
            cv::FONT_HERSHEY_SIMPLEX,
            1.0,
            cv::Scalar(0, 255, 0),
            1);

        std::stringstream ss4;
        ss4 << "x[mm] :" << xx << " y[mm] :" << yy << " z[mm] :" << depth[centerIndexXX];

        cv::putText(ColorFrame,
            ss4.str(),
            cv::Point(0, 200),
            cv::FONT_HERSHEY_SIMPLEX,
            0.8,
            cv::Scalar(0, 255, 0),
            1);


        cv::imshow("Depth Camera", dst_image);
        cv::imshow("RGB Camera", ColorFrame);



        int key = cv::waitKey(10);
        if (key == 'q') {
            break;
        }
    }
}
catch (const std::exception& e) {
    std::cout << e.what() << std::endl;
    std::cout << openni::OpenNI::getExtendedError() << std::endl;
}

    openni::OpenNI::shutdown();
    return 0;
}
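As a side note, the loop above prints the two sets of coordinates but never computes the separation itself; what I actually compare is the Euclidean distance between the two converted points, along these lines (a sketch; worldDistance is my own helper, fed with the x, y, z and xx, yy, zz values from the loop):

#include <cmath>

// Euclidean distance in millimetres between two world-space points,
// e.g. (x, y, z) and (xx, yy, zz) from the loop above.
float worldDistance(float x1, float y1, float z1,
                    float x2, float y2, float z2)
{
    float dx = x2 - x1, dy = y2 - y1, dz = z2 - z1;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}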

0 Answers

No answers yet.