我正在尝试从平面标记的多个图像执行3D重建(来自运动的结构)。我是MVG和openCV的新手。
据我所知,我必须执行以下步骤:
我希望通过&#34获得(n-1)角点的3D点;第一个视图作为原点"我们可以使用捆绑调整进行优化。
但我得到的结果非常令人失望,计算的3D点数被一个巨大因素所取代。
这些是问题:
1.我所遵循的步骤有问题吗?
2.我应该使用cv :: findHomography()和cv :: decomposeHomographyMat()来查找 相机的相对运动?
3.应该在cv :: triangulatePoints(P1,P2,point1,point2,OutMat)中指向point1和point2 被规范化和不失真?如果是的话," Outmat"被解释?
请任何对此主题有深刻见解的人,你能指出我的错误吗?
P.S。阅读"计算机视觉中的多视图几何"
后,我已经达到了上述的理解请在下面找到代码段:
cv::Mat Reconstruction::Triangulate(std::vector<cv::Point2f>
ImagePointsFirstView, std::vector<cv::Point2f>ImagePointsSecondView)
{
cv::Mat rVectFirstView, tVecFristView;
cv::Mat rVectSecondView, tVecSecondView;
cv::Mat RotMatFirstView = cv::Mat(3, 3, CV_64F);
cv::Mat RotMatSecondView = cv::Mat(3, 3, CV_64F);
cv::solvePnP(RealWorldPoints, ImagePointsFirstView, cameraMatrix, distortionMatrix, rVectFirstView, tVecFristView);
cv::solvePnP(RealWorldPoints, ImagePointsSecondView, cameraMatrix, distortionMatrix, rVectSecondView, tVecSecondView);
cv::Rodrigues(rVectFirstView, RotMatFirstView);
cv::Rodrigues(rVectSecondView, RotMatSecondView);
cv::Mat RelativeRot = RotMatFirstView-RotMatSecondView ;
cv::Mat RelativeTrans = tVecFristView-tVecSecondView ;
cv::Mat RelativePose;
cv::hconcat(RelativeRot, RelativeTrans, RelativePose);
cv::Mat ProjectionMatrix_0 = cameraMatrix*cv::Mat::eye(3, 4, CV_64F);
cv::Mat ProjectionMatrix_1 = cameraMatrix* RelativePose;
cv::Mat X;
cv::undistortPoints(ImagePointsFirstView, ImagePointsFirstView, cameraMatrix, distortionMatrix, cameraMatrix);
cv::undistortPoints(ImagePointsSecondView, ImagePointsSecondView, cameraMatrix, distortionMatrix, cameraMatrix);
cv::triangulatePoints(ProjectionMatrix_0, ProjectionMatrix_1, ImagePointsFirstView, ImagePointsSecondView, X);
X.row(0) = X.row(0) / X.row(3);
X.row(1) = X.row(1) / X.row(3);
X.row(2) = X.row(2) / X.row(3);
return X;
}