I'm using "OpenCV for Unity3d" asset (it's the same OpenCV package for Java but translated to C# for Unity3d) in order to create an Augmented Reality application for my MSc Thesis (Computer Science).
So far, I can detect an object in video frames using the ORB feature detector, and I can find the 3D-to-2D relation using OpenCV's solvePnP method (I did the camera calibration as well). From that method I get the translation and rotation vectors. The problem occurs at the augmentation stage, where I have to show a 3D model as a virtual object and update its position and rotation each frame. OpenCV's Rodrigues method gives me a rotation matrix, but Unity3d works with quaternion rotations, so I'm updating the object's position and rotation incorrectly, and I can't figure out how to implement the conversion formula (from the Rodrigues rotation matrix to a quaternion).
Getting the rvec and tvec:
Mat rvec = new Mat(); // rotation vector (Rodrigues / axis-angle form)
Mat tvec = new Mat(); // translation vector
Mat rotationMatrix = new Mat (); // 3x3 rotation matrix
Calib3d.solvePnP (object_world_corners, scene_flat_corners, CalibrationMatrix, DistortionCoefficientsMatrix, rvec, tvec);
Calib3d.Rodrigues (rvec, rotationMatrix); // convert the rotation vector into a 3x3 rotation matrix
Updating the position of the virtual object:
Vector3 objPosition = new Vector3 ();
objPosition.x = (model.transform.position.x + (float)tvec.get (0, 0)[0]);
objPosition.y = (model.transform.position.y + (float)tvec.get (1, 0)[0]);
objPosition.z = (model.transform.position.z - (float)tvec.get (2, 0)[0]);
model.transform.position = objPosition;
I use a minus sign for the Z axis because when you convert from OpenCV's coordinate system to Unity3d's you must invert the Z axis (I checked the coordinate systems myself).
Unity3d's Coordinate System (Green is Y, Red is X and Blue is Z):
OpenCV's Coordinate System:
In addition, I did the same kind of thing with the rotation matrix to update the virtual object's rotation.
P.S. I found a similar question, but the person who asked it did not clearly post the solution.
Thanks!
Answer (score 5):
Right after cv::solvePnP you have the 3x3 rotation matrix. That matrix, since it is a rotation, is orthogonal and normalized. Thus, its columns are, in order from left to right: the right vector (along the X axis), the up vector (along the Y axis), and the forward vector (along the Z axis).
OpenCV uses a right-handed coordinate system. Sitting on the camera looking along the optical axis, the X axis goes right, the Y axis goes downward, and the Z axis goes forward.
You pass the forward vector F = (fx, fy, fz) and the up vector U = (ux, uy, uz) to Unity. These are the third and second columns respectively. There is no need to normalize them; they are already normalized.
In Unity, you then construct the quaternion from those two vectors.
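A minimal sketch of that construction (reusing the rotationMatrix and model names from the question; the Y components are negated to pass from OpenCV's right-handed coordinate system to Unity's left-handed one, matching STEP 1 below):
// Up vector U = 2nd column, forward vector F = 3rd column of the 3x3 rotation matrix.
// Their Y components are negated: right-handed (OpenCV) to left-handed (Unity).
Vector3 forward = new Vector3 ((float)rotationMatrix.get (0, 2)[0],
                              -(float)rotationMatrix.get (1, 2)[0],
                               (float)rotationMatrix.get (2, 2)[0]);
Vector3 up = new Vector3 ((float)rotationMatrix.get (0, 1)[0],
                         -(float)rotationMatrix.get (1, 1)[0],
                          (float)rotationMatrix.get (2, 1)[0]);
model.transform.rotation = Quaternion.LookRotation (forward, up); // forward and up fully define the rotation
With the rotation in place, position and camera setup go in three steps: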
// STEP 1 : fetch position from OpenCV + basic transformation
Vector3 pos; // from OpenCV
pos = new Vector3(pos.x, -pos.y, pos.z); // right-handed coordinates system (OpenCV) to left-handed one (Unity)
// STEP 2 : set virtual camera's frustum (Unity) to match physical camera's parameters
Vector2 fparams; // from OpenCV (calibration parameters Fx and Fy = focal lengths in pixels)
Vector2 resolution; // image resolution from OpenCV
float vfov = 2.0f * Mathf.Atan(0.5f * resolution.y / fparams.y) * Mathf.Rad2Deg; // virtual camera (pinhole type) vertical field of view
Camera cam; // TODO get reference one way or another
cam.fieldOfView = vfov;
cam.aspect = resolution.x / resolution.y; // you could set a viewport rect with proper aspect as well... I would prefer the viewport approach
// STEP 3 : shift position to compensate for physical camera's optical axis not going exactly through image center
Vector2 cparams; // from OpenCV (calibration parameters Cx and Cy = optical center shifts from image center in pixels)
Vector3 imageCenter = new Vector3(0.5f, 0.5f, pos.z); // in viewport coordinates
Vector3 opticalCenter = new Vector3(0.5f + cparams.x / resolution.x, 0.5f + cparams.y / resolution.y, pos.z); // in viewport coordinates
pos += cam.ViewportToWorldPoint(imageCenter) - cam.ViewportToWorldPoint(opticalCenter); // position is set as if physical camera's optical axis went exactly through image center
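As a quick sanity check of the STEP 2 formula with hypothetical numbers: for a 640x480 image with Fy = 600 px, vfov = 2 * atan(0.5 * 480 / 600) = 2 * atan(0.4) ≈ 43.6 degrees, and the aspect is 640 / 480 ≈ 1.33.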
And that is pretty much it. Hope this helps!
EDIT for the position-related comments:
NOTE: The Z axis in OpenCV lies on the camera's optical axis, which passes through the image near its center, but not exactly at the center in general. Among your calibration parameters there are the Cx and Cy parameters. Combined, these are a 2D offset in image space from the image center to the point where the Z axis actually pierces the image. That shift must be taken into account to draw 3D content exactly over the 2D background.
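For example (hypothetical numbers): with a 640x480 image and shifts Cx = 10, Cy = -5 from the image center, the optical center in viewport coordinates is (0.5 + 10/640, 0.5 - 5/480) ≈ (0.516, 0.490), and STEP 3 above shifts every drawn position by the world-space difference between that point and the image center (0.5, 0.5).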
To get proper positioning in Unity, apply the three STEP blocks shown above.
You place the image retrieved from the physical camera centered in front of the virtual camera on its forward axis (scaled to fit the frustum), and you get proper 3D positions drawn over the 2D background!
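A minimal sketch of that last step, under stated assumptions: cam is the camera configured in STEP 2, backgroundQuad is a hypothetical Unity Quad textured with the camera image, and a quad of height 2 * d * tan(vfov / 2) exactly fills a pinhole frustum at distance d:
// Size and place the textured quad so it fills the camera frustum at distance d.
float d = 10f; // any distance between the near and far clip planes
float h = 2f * d * Mathf.Tan (0.5f * cam.fieldOfView * Mathf.Deg2Rad); // quad height in world units
float w = h * cam.aspect; // quad width in world units
backgroundQuad.transform.position = cam.transform.position + cam.transform.forward * d;
backgroundQuad.transform.rotation = cam.transform.rotation; // keep it facing the camera
backgroundQuad.transform.localScale = new Vector3 (w, h, 1f);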