Augmented reality in OpenGL using OpenCV solvePnP

Asked: 2017-06-01 17:25:31

Tags: android opencv opengl-es

I'm trying to build an augmented reality app on Android using BoofCV (a Java alternative to OpenCV) and OpenGL ES 2.0. I have a marker from which I can get the image points, and I use BoofCV's solvePnP function to get the world-to-camera transform. I want to be able to draw the marker in 3D using OpenGL. This is what I have so far:

On every camera frame, I call solvePnP:

Se3_F64 worldToCam = MathUtils.worldToCam(__qrWorldPoints, imagePoints);
mGLAssetSurfaceView.setWorldToCam(worldToCam);
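
MathUtils.worldToCam is my own wrapper. A minimal sketch of such a wrapper, assuming BoofCV's Estimate1ofPnP API, an EPnP solver, and manual pixel normalization (the intrinsics fx, fy, cx, cy and all settings here are illustrative, not the exact code):

import java.util.ArrayList;
import java.util.List;

import boofcv.abst.geo.Estimate1ofPnP;
import boofcv.factory.geo.EnumPNP;
import boofcv.factory.geo.FactoryMultiView;
import boofcv.struct.geo.Point2D3D;
import georegression.struct.point.Point2D_F64;
import georegression.struct.se.Se3_F64;

public class MathUtils {

    // Pinhole intrinsics from camera calibration (placeholder values)
    public static double fx = 1000, fy = 1000, cx = 640, cy = 360;

    // Estimate the world-to-camera transform from 3D world points and
    // their detected 2D pixel locations
    public static Se3_F64 worldToCam(double[][] worldPoints,
                                     List<Point2D_F64> imagePoints) {
        List<Point2D3D> observations = new ArrayList<>();
        for (int i = 0; i < worldPoints.length; i++) {
            Point2D3D p = new Point2D3D();
            p.location.set(worldPoints[i][0], worldPoints[i][1], worldPoints[i][2]);
            // PnP works on normalized image coordinates, so strip the intrinsics
            Point2D_F64 pixel = imagePoints.get(i);
            p.observation.set((pixel.x - cx) / fx, (pixel.y - cy) / fy);
            observations.add(p);
        }

        Estimate1ofPnP pnp = FactoryMultiView.pnp_1(EnumPNP.EPNP, 10, 0);
        Se3_F64 worldToCam = new Se3_F64();
        if (!pnp.process(observations, worldToCam))
            throw new RuntimeException("PnP estimation failed");
        return worldToCam;
    }
}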

This is how I define the world points:

static float qrSideLength = 79.365f; // mm

private static final double[][] __qrWorldPoints = {
        {qrSideLength * -0.5, qrSideLength * 0.5, 0},
        {qrSideLength * -0.5, qrSideLength * -0.5, 0},
        {qrSideLength * 0.5, qrSideLength * -0.5, 0},
        {qrSideLength * 0.5, qrSideLength * 0.5, 0}
};

I'm feeding it a square centered at the origin, with its side length in millimeters.

I can confirm that the rotation and translation vectors returned by solvePnP are reasonable, so I don't know if the problem is here.
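
One way to make that sanity check concrete is to reproject the world points through worldToCam and the intrinsics and compare against the detected image points. A minimal sketch, assuming the pinhole intrinsics fx, fy, cx, cy from my calibration and image points stored as a double[][]:

import georegression.struct.point.Point3D_F64;
import georegression.struct.se.Se3_F64;
import georegression.transform.se.SePointOps_F64;

// Reproject each world point; errors of only a few pixels suggest
// the solvePnP result is trustworthy
static void checkReprojection(Se3_F64 worldToCam,
                              double[][] worldPoints, double[][] imagePoints) {
    for (int i = 0; i < worldPoints.length; i++) {
        Point3D_F64 world = new Point3D_F64(
                worldPoints[i][0], worldPoints[i][1], worldPoints[i][2]);
        Point3D_F64 cam = new Point3D_F64();
        SePointOps_F64.transform(worldToCam, world, cam);

        // Pinhole projection: perspective divide, then apply the intrinsics
        double u = MathUtils.fx * cam.x / cam.z + MathUtils.cx;
        double v = MathUtils.fy * cam.y / cam.z + MathUtils.cy;

        double err = Math.hypot(u - imagePoints[i][0], v - imagePoints[i][1]);
        System.out.println("point " + i + ": reprojection error " + err + " px");
    }
}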

I pass the result of solvePnP to my renderer:

public void setWorldToCam(Se3_F64 worldToCam) {

    DenseMatrix64F _R = worldToCam.R;
    Vector3D_F64 _T = worldToCam.T;

    // Concatenate the rotation matrix and translation vector into
    // a 4x4 view matrix
    double[][] __view = {
        {_R.get(0, 0), _R.get(0, 1), _R.get(0, 2), _T.getX()},
        {_R.get(1, 0), _R.get(1, 1), _R.get(1, 2), _T.getY()},
        {_R.get(2, 0), _R.get(2, 1), _R.get(2, 2), _T.getZ()},
        {0, 0, 0, 1}
    };

    DenseMatrix64F _view = new DenseMatrix64F(__view);

    // Matrix to convert from BoofCV (OpenCV) coordinate system to OpenGL coordinate system
    double[][] __cv_to_gl = {
            {1, 0, 0, 0},
            {0, -1, 0, 0},
            {0, 0, -1, 0},
            {0, 0, 0, 1}
    };

    DenseMatrix64F _cv_to_gl = new DenseMatrix64F(__cv_to_gl);

    // Multiply the View Matrix by the BoofCV to OpenGL matrix to apply the coordinate transform
    DenseMatrix64F view = new SimpleMatrix(__view).mult(new SimpleMatrix(__cv_to_gl)).getMatrix();

    // BoofCV stores matrices in row-major order, but OpenGL expects
    // column-major, so transpose the view matrix, flatten it into a
    // list of 16 values, and convert them to floating point
    double[] viewd = new SimpleMatrix(view).transpose().getMatrix().getData();

    for (int i = 0; i < mViewMatrix.length; i++) {
        mViewMatrix[i] = (float) viewd[i];
    }
}
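
One detail worth flagging in setWorldToCam: the axis-flip matrix converts camera-space coordinates, so the usual convention is to apply it on the left of the view matrix; right-multiplying it, as above, flips the world axes before the view transform instead. A sketch of that ordering with the same EJML types, reusing the __view matrix built above:

// Flip Y and Z: BoofCV/OpenCV cameras look down +Z with Y pointing down,
// while OpenGL cameras look down -Z with Y pointing up
double[][] __cv_to_gl = {
        {1,  0,  0, 0},
        {0, -1,  0, 0},
        {0,  0, -1, 0},
        {0,  0,  0, 1}
};

// Left-multiply so the flip happens after world-to-camera:
// p_gl = CV_TO_GL * VIEW * p_world
DenseMatrix64F view = new SimpleMatrix(__cv_to_gl)
        .mult(new SimpleMatrix(__view))
        .getMatrix();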

I also use the camera intrinsics obtained from camera calibration to feed OpenGL's projection matrix:

@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {

    // this projection matrix is applied to object coordinates
    // in the onDrawFrame() method

    double fx = MathUtils.fx;
    double fy = MathUtils.fy;
    float fovy = (float) (2 * Math.atan(0.5 * height / fy) * 180 / Math.PI);
    float aspect = (float) ((width * fy) / (height * fx));

    // be careful with this, it could explain why you don't see certain objects
    float near = 0.1f;
    float far = 100.0f;

    Matrix.perspectiveM(mProjectionMatrix, 0, fovy, aspect, near, far);

    GLES20.glViewport(0, 0, width, height);

}
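
A caveat on the code above: perspectiveM assumes the principal point sits exactly at the image center, which calibrated intrinsics rarely satisfy. A sketch of building the projection matrix directly from the intrinsics instead (the helper name and the cx, cy parameters are mine, and the signs of the principal-point terms depend on your image-origin convention):

// Build a column-major OpenGL projection matrix straight from the pinhole
// intrinsics so an off-center principal point is honored
private static void projectionFromIntrinsics(float[] m, double fx, double fy,
        double cx, double cy, int width, int height, double near, double far) {
    java.util.Arrays.fill(m, 0f);
    m[0]  = (float) (2.0 * fx / width);                  // column 0
    m[5]  = (float) (2.0 * fy / height);                 // column 1
    m[8]  = (float) (1.0 - 2.0 * cx / width);            // column 2
    m[9]  = (float) (2.0 * cy / height - 1.0);
    m[10] = (float) (-(far + near) / (far - near));
    m[11] = -1f;
    m[14] = (float) (-2.0 * far * near / (far - near));  // column 3
}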

The square I'm drawing is the one defined in this Google example:

@Override
public void onDrawFrame(GL10 gl) {

    // redraw background color
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

    // Set the camera position (View matrix)
    // Matrix.setLookAtM(mViewMatrix, 0, 0, 0, -3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);


    // Combine the rotation matrix with the projection and camera view
    // Note that the mMVPMatrix factor *must be the first* in order
    // for matrix multiplication product to be correct

    // Calculate the projection and view transformation
    Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);

    // Draw shape
    mSquare.draw(mMVPMatrix);
}

I think the problem is related to the fact that the square defined in Google's sample code does not take real-world lengths into account. As far as I know, the OpenGL coordinate system has corners (-1, 1), (-1, -1), (1, -1), (1, 1), which do not correspond to the millimeter object points I defined for BoofCV, even though they are in the correct order. A possible fix is sketched after the snippet below.

static float squareCoords[] = {
        -0.5f,  0.5f, 0.0f,   // top left
        -0.5f, -0.5f, 0.0f,   // bottom left
        0.5f, -0.5f, 0.0f,   // bottom right
        0.5f,  0.5f, 0.0f }; // top right
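
If that is the cause, one fix (a sketch, untested against my project) is to scale the unit square up to the marker's physical size with a model matrix, so the geometry lives in the same millimeter units as the solvePnP world points:

// In onDrawFrame: scale the ±0.5 unit square to the marker's size in mm,
// then apply the usual model-view-projection chain
float[] mModelMatrix = new float[16];
float[] mMVMatrix = new float[16];

Matrix.setIdentityM(mModelMatrix, 0);
Matrix.scaleM(mModelMatrix, 0, qrSideLength, qrSideLength, 1f);

Matrix.multiplyMM(mMVMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVMatrix, 0);

mSquare.draw(mMVPMatrix);

With millimeter units the near/far planes also need revisiting: near = 0.1f and far = 100.0f cover only 100 mm, so a marker at arm's length would fall outside the frustum.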
