SceneKit 3D Marker Augmented Reality iOS

Date: 2017-05-30 08:48:54

Tags: ios objective-c swift augmented-reality scenekit

For the past few weeks I have been working on a simple proof-of-concept app in which a 3D model is projected over a specific augmented-reality marker (in my case, an ArUco marker) on iOS (using Swift and Objective-C).

I calibrated an iPad camera with a specific fixed lens position and used it to estimate the pose of the AR markers (which, from my debug analysis, looks very accurate). The problem appears (surprise, surprise) when I try to use a SceneKit scene to project the model over the marker.
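
(For context, the fixed lens position can be enforced by locking the focus before capturing calibration images. The snippet below is a minimal sketch of that idea, not code from the project; `device` is assumed to be the active AVCaptureDevice, and 0.5 is an arbitrary lens position:)

    // Sketch: lock the lens at a fixed position so the calibrated intrinsics
    // remain valid for every captured frame.
    NSError *error = nil;
    if ([device lockForConfiguration:&error]) {
        [device setFocusModeLockedWithLensPosition:0.5 completionHandler:nil];
        [device unlockForConfiguration];
    }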

I know that the axes in OpenCV and SceneKit differ (in Y and Z), and I have already applied that correction, as well as handled the row-order/column-order difference between the two libraries.

After building the projection matrix, I apply the same transformation to the 3D model, and from my debug analysis the object seems to be translated to the desired position with the desired rotation. The problem is that it never overlaps the specific image pixel position of the marker. I am using an AVCaptureVideoPreviewLayer to put the video in the background, with the same bounds as the SceneKit view.

Does anyone have any idea why this happens? I tried playing with the camera's FOV, but it had no real effect on the results.
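
(For reference, the vertical FOV implied by the calibration can be derived from the intrinsics. This is a sketch, where `fy` and `imageHeight` are assumed to come from the calibration, not names from the project:)

    // Sketch: vertical field of view (degrees) implied by the intrinsic matrix.
    double fovY = 2.0 * atan((imageHeight / 2.0) / fy) * 180.0 / M_PI;
    _cameraNode.camera.yFov = fovY;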

Thank you all for your time.

EDIT1: I will post some code here to show what I am currently doing.

I have two subviews inside the main view: one is a background AVCaptureVideoPreviewLayer and the other is a SceneKit view. Both have the same bounds as the main view; a sketch of this setup follows.
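
(A minimal sketch of that layering, under the assumption that `_captureSession` is a running AVCaptureSession; the transparent SCNView background is what lets the video show through:)

    // Sketch: camera preview behind a transparent SceneKit view of equal bounds.
    AVCaptureVideoPreviewLayer *previewLayer =
        [AVCaptureVideoPreviewLayer layerWithSession:_captureSession];
    previewLayer.frame = self.view.bounds;
    [self.view.layer insertSublayer:previewLayer atIndex:0];

    _sceneKitView = [[SCNView alloc] initWithFrame:self.view.bounds];
    _sceneKitView.backgroundColor = [UIColor clearColor]; // show the video behind
    [self.view addSubview:_sceneKitView];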

On each frame, I use an OpenCV wrapper that outputs the pose of each marker:

    std::vector<int> ids;
    std::vector<std::vector<cv::Point2f>> corners, rejected;

    cv::aruco::detectMarkers(frame, _dictionary, corners, ids, _detectorParams, rejected);
    if (ids.size() > 0){
        cv::aruco::drawDetectedMarkers(frame, corners, ids);
        cv::Mat rvecs, tvecs;
        cv::aruco::estimatePoseSingleMarkers(corners, 2.6, _intrinsicMatrix, _distCoeffs, rvecs, tvecs);

        // Let's protect ourselves against multiple markers
        if (rvecs.total() > 1)
            return;
        _markerFound = true;

        // Convert the rotation vector to a 3x3 rotation matrix
        cv::Rodrigues(rvecs, _currentR);

        _currentT = tvecs;

        // Build the 4x4 extrinsic matrix [R | t]
        // (_currentExtrinsics is assumed to be a 4x4 CV_64F matrix)
        for (int row = 0; row < _currentR.rows; row++){
            for (int col = 0; col < _currentR.cols; col++){
                _currentExtrinsics.at<double>(row, col) = _currentR.at<double>(row, col);
            }
            _currentExtrinsics.at<double>(row, 3) = _currentT.at<double>(row);
        }
        _currentExtrinsics.at<double>(3,3) = 1;
        std::cout << tvecs << std::endl;

        // Convert the coordinate system of OpenCV to OpenGL (SceneKit).
        // Note that in OpenCV z points away from the camera (in OpenGL it points into the camera)
        // and y points down (in OpenGL it points up).
        // Another note: OpenCV uses a column-order matrix representation, while SceneKit
        // uses a row-order matrix, but we'll take care of that later.
        cv::Mat cvToGl = cv::Mat::zeros(4, 4, CV_64F);
        cvToGl.at<double>(0,0) = 1.0;
        cvToGl.at<double>(1,1) = -1.0; // invert the y axis
        cvToGl.at<double>(2,2) = -1.0; // invert the z axis
        cvToGl.at<double>(3,3) = 1.0;
        _currentExtrinsics = cvToGl * _currentExtrinsics;
        cv::aruco::drawAxis(frame, _intrinsicMatrix, _distCoeffs, rvecs, tvecs, 5);
    }

Then, on each frame, I convert the OpenCV matrix to an SCNMatrix4:

    - (SCNMatrix4) transformToSceneKit:(cv::Mat&) openCVTransformation{
        SCNMatrix4 mat = SCNMatrix4Identity;
        // Transpose (column-order to row-order)
        openCVTransformation = openCVTransformation.t();

        // copy the rotation rows
        mat.m11 = (float) openCVTransformation.at<double>(0, 0);
        mat.m12 = (float) openCVTransformation.at<double>(0, 1);
        mat.m13 = (float) openCVTransformation.at<double>(0, 2);
        mat.m14 = (float) openCVTransformation.at<double>(0, 3);

        mat.m21 = (float) openCVTransformation.at<double>(1, 0);
        mat.m22 = (float) openCVTransformation.at<double>(1, 1);
        mat.m23 = (float) openCVTransformation.at<double>(1, 2);
        mat.m24 = (float) openCVTransformation.at<double>(1, 3);

        mat.m31 = (float) openCVTransformation.at<double>(2, 0);
        mat.m32 = (float) openCVTransformation.at<double>(2, 1);
        mat.m33 = (float) openCVTransformation.at<double>(2, 2);
        mat.m34 = (float) openCVTransformation.at<double>(2, 3);

        // copy the translation row
        mat.m41 = (float) openCVTransformation.at<double>(3, 0);
        mat.m42 = (float) openCVTransformation.at<double>(3, 1) + 2.5; // manual y offset
        mat.m43 = (float) openCVTransformation.at<double>(3, 2);
        mat.m44 = (float) openCVTransformation.at<double>(3, 3);

        return mat;
    }

On each frame in which the AR marker is found, I add a box to the scene and apply the transformation to the object node:

    SCNBox *box = [SCNBox boxWithWidth:5.0 height:5.0 length:5.0 chamferRadius:0.0];
    _boxNode = [SCNNode nodeWithGeometry:box];
    if (found){
        [self.delegate returnExtrinsicsMat:extrinsicMatrixOfTheMarker];
        Mat R, T;
        [self.delegate returnRotationMat:R];
        [self.delegate returnTranslationMat:T];
        SCNMatrix4 Transformation;
        Transformation = [self transformToSceneKit:extrinsicMatrixOfTheMarker];
        //_cameraNode.transform = SCNMatrix4Invert(Transformation);
        [_sceneKitScene.rootNode addChildNode:_cameraNode];
        //_cameraNode.camera.projectionTransform = SCNMatrix4Identity;
        //_cameraNode.camera.zNear = 0.0;
        _sceneKitView.pointOfView = _cameraNode;
        _boxNode.transform = Transformation;

        [_sceneKitScene.rootNode addChildNode:_boxNode];
        //_boxNode.position = SCNVector3Make(Transformation.m41, Transformation.m42, Transformation.m43);

        std::cout << (_boxNode.position.x) << " " << (_boxNode.position.y) << " " << (_boxNode.position.z) << std::endl << std::endl;
    }

For example, if the translation vector is (-1, 5, 20), the object appears in the scene at position (-1, -5, -20), and the rotation is also correct. The problem is that it never appears at the correct position in the background image. I will add some images showing the result.
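
That sign pattern is exactly the cvToGl flip from the code above (which negates y and z) applied to the translation column (before the manual +2.5 y offset in transformToSceneKit):

    diag(1, -1, -1, 1) * (-1, 5, 20, 1)^T = (-1, -5, -20, 1)^T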

Result1

Result2

Does anyone know why this happens?

1 Answer:

Answer 0 (score: 3)

Found the solution. Instead of applying the transformation to the object's node, I applied the inverted transformation matrix to the camera node. Then, for the camera's projection transform matrix, I applied the following matrix:

    projection = SCNMatrix4Identity
    projection.m11 = (2 * Float(cameraMatrix[0])) / -(ImageWidth * 0.5)
    projection.m12 = (-2 * Float(cameraMatrix[1])) / (ImageWidth * 0.5)
    projection.m13 = (width - (2 * Float(cameraMatrix[2]))) / (ImageWidth * 0.5)
    projection.m22 = (2 * Float(cameraMatrix[4])) / (ImageHeight * 0.5)
    projection.m23 = (-height + (2 * Float(cameraMatrix[5]))) / (ImageHeight * 0.5)
    projection.m33 = (-far - near) / (far - near)
    projection.m34 = (-2 * far * near) / (far - near)
    projection.m43 = -1
    projection.m44 = 0

Here far and near are the z clipping planes, and cameraMatrix is the row-major 3x3 intrinsic matrix from the calibration, so cameraMatrix[0] and cameraMatrix[4] are the focal lengths (fx, fy) and cameraMatrix[2] and cameraMatrix[5] are the principal point (cx, cy).
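
Put together, the camera-side fix might look roughly like this (a minimal sketch in the question's Objective-C style, not the answer's exact code; `projection` is assumed to be the matrix above converted to an SCNMatrix4, and the node names are taken from the question):

    // Sketch: move the camera with the inverted extrinsic instead of moving
    // the box, and install the intrinsics-based projection matrix.
    SCNMatrix4 transformation = [self transformToSceneKit:extrinsicMatrixOfTheMarker];
    _cameraNode.transform = SCNMatrix4Invert(transformation);
    _cameraNode.camera.projectionTransform = projection;
    _sceneKitView.pointOfView = _cameraNode;
    _boxNode.transform = SCNMatrix4Identity; // the box stays at the marker's origin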

I also had to correct the box's initial position so that it is centered on the marker.
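
One way to do that correction (a sketch under the assumption that the box's local z axis ends up along the marker normal after the OpenCV-to-SceneKit conversion; the axis and sign may need adjusting):

    // Sketch: offset the box's pivot by half its size so it rests centered
    // on the marker plane instead of straddling it.
    _boxNode.pivot = SCNMatrix4MakeTranslation(0.0, 0.0, -box.length / 2.0);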