Taking known values into account when estimating camera pose

Date: 2019-02-03 19:25:08

Tags: java opencv vision pose-estimation opencv-solvepnp

I am trying to estimate the pose of a camera relative to 4 known world coordinates. Because of constraints in my system, some details of the camera pose are known and fixed: its vertical offset, pitch, and roll are known constants. I would like to know how I can use this information to improve the results of OpenCV's solvePnP algorithm.

Currently I am finding that, without this information, even the slightest change in the image points can cause the result to change drastically. For example, I placed the camera at a known pose:

X = 2ft
Y = 1ft
Z = 5ft
ROLL = 0 degrees
PITCH = 180 degrees
YAW = 0 degrees
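
For reference, this fixed pose can be written in OpenCV's extrinsic convention (x_cam = R * x_world + t), where the 180-degree pitch is a rotation of pi about the camera's x-axis. This is only my own back-of-envelope encoding; the axis conventions and the sign of the y component are assumptions I still need to double-check:

// Sketch: encode the known, fixed part of the pose in OpenCV terms.
// Assumes x_cam = R * x_world + t and that pitch is a rotation about x.
Mat R = Mat.eye(3, 3, CvType.CV_64F);
R.put(0, 0,
    1.0,  0.0,  0.0,
    0.0, -1.0,  0.0,   // 180-degree pitch = rotation of pi about x
    0.0,  0.0, -1.0);
Mat rvecKnown = new Mat();
Calib3d.Rodrigues(R, rvecKnown);           // yields approximately (pi, 0, 0)

// With the camera centre C (in inches) in the target frame, t = -R * C.
Mat C = new MatOfDouble(24.0, 12.0, 60.0); // 2 ft, 1 ft, 5 ft
Mat tKnown = new Mat();
Core.gemm(R, C, -1.0, new Mat(), 0.0, tKnown);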

Then I had the camera track the 4 image points and computed the pose, and I got the following:

 {48.0, 138.0} 
 {40.0, 136.0} 
 {45.0, 114.0} 
 {54.0, 114.0} 
 X = 2.4235989629072314 
 Y = 1.2370888865388812 
 Z = 4.717115774644273 
 ROLL = -7.555688896466208 
 PITCH = 165.9771402205544 
 YAW = 1.5292313860396367 
 ============================= 
 {48.0, 138.0} 
 {40.0, 136.0} 
 {45.0, 114.0} 
 {53.0, 114.0} 
 X = 2.864381855099463 
 Y = 0.9925235082316144 
 Z = 4.605675917036408 
 ROLL = -7.962130849477691 
 PITCH = 168.14583005865828 
 YAW = 6.697852245666419 
 ============================= 
 {48.0, 137.0} 
 {40.0, 136.0} 
 {46.0, 112.0} 
 {53.0, 114.0} 
 X = -3.3067589122064986 
 Y = -0.2727418953073936 
 Z = 4.393018415532629 
 ROLL = -6.929120013468928 
 PITCH = -168.6014586711855 
 YAW = -59.587627235667476 
The relevant parts of my processing class (OpenCV Java bindings; the object points are in inches):

public VisionProcessor() {
    // Define the bottom-right corner of the left vision target as the origin
    mObjectPoints = new MatOfPoint3f(
        new Point3(0.0, 0.0, 0.0),        // bottom right
        new Point3(-1.9363, 0.5008, 0.0), // bottom left
        new Point3(-0.5593, 5.8258, 0.0), // top-left
        new Point3(1.377, 5.325, 0.0)     // top-right
    );

    mCameraMatrix = Mat.eye(3, 3, CvType.CV_64F);
    mCameraMatrix.put(0, 0, 2.5751292067328632e+02);
    mCameraMatrix.put(0, 2, 1.5971077914723165e+02);
    mCameraMatrix.put(1, 1, 2.5635071715912881e+02);
    mCameraMatrix.put(1, 2, 1.1971433393615548e+02);
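    // Back-of-envelope note: with fx ~ 257 px, one pixel subtends roughly
    // atan(1/257) ~ 0.22 degrees, and the tracked target spans only about
    // 15 px horizontally in the samples above, so single-pixel corner noise
    // is a large fraction of the whole measurement.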

    mDistortionCoefficients = new MatOfDouble(
        2.9684613693070039e-01, 
        -1.4380252254747885e+00,
        -2.2098421479494509e-03, 
        -3.3894563533907176e-03, 
        2.5344430354806740e+00
    );
}

public void update(double[] cornX, double[] cornY) {
    // Corner pixels come from mPointFinder, ordered to match mObjectPoints
    MatOfPoint2f imagePoints = new MatOfPoint2f(
        mPointFinder.getBottomRight(), 
        mPointFinder.getBottomLeft(),
        mPointFinder.getTopLeft(), 
        mPointFinder.getTopRight()
    );

    // Intended initial pose; note that this solvePnP overload defaults to
    // useExtrinsicGuess = false, so rvec/tvec act purely as outputs here
    // and these initial values are discarded.
    Mat rotationVector = new MatOfDouble(Math.PI, 0, 0);
    Mat translationVector = new MatOfDouble(-24, 0, 60);
    Calib3d.solvePnP(mObjectPoints, imagePoints, mCameraMatrix, mDistortionCoefficients,
                     rotationVector, translationVector);

    Mat rotationMatrix = new Mat();
    Calib3d.Rodrigues(rotationVector, rotationMatrix);

    // Assemble [R|t] so decomposeProjectionMatrix can recover Euler angles
    Mat projectionMatrix = new Mat(3, 4, CvType.CV_64F);
    projectionMatrix.put(0, 0,
        rotationMatrix.get(0, 0)[0], rotationMatrix.get(0, 1)[0], rotationMatrix.get(0, 2)[0], translationVector.get(0, 0)[0],
        rotationMatrix.get(1, 0)[0], rotationMatrix.get(1, 1)[0], rotationMatrix.get(1, 2)[0], translationVector.get(1, 0)[0],
        rotationMatrix.get(2, 0)[0], rotationMatrix.get(2, 1)[0], rotationMatrix.get(2, 2)[0], translationVector.get(2, 0)[0]
    );

    Mat cameraMatrix = new Mat();
    Mat rotMatrix = new Mat();
    Mat transVect = new Mat();
    Mat rotMatrixX = new Mat();
    Mat rotMatrixY = new Mat();
    Mat rotMatrixZ = new Mat(); 
    Mat eulerAngles = new Mat(); // filled with (x, y, z) rotation angles in degrees
    Calib3d.decomposeProjectionMatrix(projectionMatrix, cameraMatrix, rotMatrix, transVect, rotMatrixX, rotMatrixY, rotMatrixZ, eulerAngles);

    System.out.println("X = " + translationVector.get(0,0)[0] / 12.0);
    System.out.println("Y = " + translationVector.get(1,0)[0] / 12.0);
    System.out.println("Z = " + translationVector.get(2,0)[0] / 12.0);
    System.out.println("ROLL = " + eulerAngles.get(2,0)[0]);
    System.out.println("PITCH = " + eulerAngles.get(0,0)[0]);
    System.out.println("YAW = " + eulerAngles.get(1,0)[0]);
    System.out.println("=============================")
}
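
One variant I have been experimenting with (not yet validated) is actually handing the solver my known pose: the longer solvePnP overload with useExtrinsicGuess = true starts the iterative minimisation from the supplied rvec/tvec instead of discarding them, which should at least keep the solver from jumping to a distant local minimum. Since all four object points are coplanar, a planar-specific flag such as Calib3d.SOLVEPNP_IPPE (available in recent OpenCV releases) may also be worth trying; planar four-point targets are known to admit two ambiguous poses, which could explain the sign flips in my output above.

// Sketch: seed the iterative solver with the known pose instead of letting
// it start from scratch. Per the docs, only SOLVEPNP_ITERATIVE honours the
// extrinsic guess.
Mat rvec = new MatOfDouble(Math.PI, 0, 0); // known 180-degree pitch
Mat tvec = new MatOfDouble(-24, 0, 60);    // same initial guess as above
Calib3d.solvePnP(mObjectPoints, imagePoints, mCameraMatrix, mDistortionCoefficients,
                 rvec, tvec,
                 true,                      // useExtrinsicGuess
                 Calib3d.SOLVEPNP_ITERATIVE);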

I expected the output of the system to be very close to the real-world pose, but as the data shows, tiny changes in the image points drastically affect the computed pose.
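
A check I still plan to add: reproject the object points with each recovered pose and compare them with the measured corners. If two wildly different poses both reproject to within about a pixel, then four closely spaced, low-resolution points simply do not constrain the pose on their own, and the known roll/pitch/height would have to be imposed explicitly (for example via the extrinsic guess above). A sketch of that check, placed at the end of update():

// Sketch: mean reprojection error of the recovered pose, in pixels.
MatOfPoint2f reprojected = new MatOfPoint2f();
Calib3d.projectPoints(mObjectPoints, rotationVector, translationVector,
                      mCameraMatrix, mDistortionCoefficients, reprojected);
Point[] measured  = imagePoints.toArray();
Point[] predicted = reprojected.toArray();
double err = 0;
for (int i = 0; i < measured.length; i++) {
    err += Math.hypot(measured[i].x - predicted[i].x,
                      measured[i].y - predicted[i].y);
}
System.out.println("mean reprojection error (px) = " + err / measured.length);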

0 Answers:

No answers yet.