Pose estimation: solvePnP and epipolar geometry disagree

Date: 2015-08-08 21:18:33

Tags: opencv computer-vision pose-estimation

I have a relative camera pose estimation problem: I am looking at a scene with cameras at different orientations, separated by a certain distance. Initially, I compute the essential matrix with the 5-point algorithm and decompose it to obtain the R and t of camera 2 w.r.t. camera 1.
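For context, that step looks roughly like the sketch below (trimmed down, assuming OpenCV 3's findEssentialMat/recoverPose; pts1, pts2, focal and pp stand in for my matched image points and intrinsics):

// Sketch of the 5-point + decomposition step: estimate E with RANSAC,
// then recover the pose of camera 2 relative to camera 1.
cv::Mat mask, R, t;
cv::Mat E = cv::findEssentialMat(pts1, pts2, focal, pp, cv::RANSAC, 0.999, 1.0, mask);
// recoverPose resolves the four-fold decomposition ambiguity with a
// cheirality check; t is only defined up to scale.
cv::recoverPose(E, pts1, pts2, R, t, focal, pp, mask);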

I thought it would be a good idea to check this by triangulating the two sets of image points into 3D and then running solvePnP on the 3D-2D correspondences, but the results I get from solvePnP are way off. I am trying to do this to "refine" my pose, since the scale can change from one frame to another. In any case, in one test I rotated camera 2 by 45 degrees about the Z axis relative to camera 1, and the epipolar geometry part gave me this answer:

Relative camera rotation is [1.46774, 4.28483, 40.4676]
Translation vector is [-0.778165583410928;  -0.6242059242696293;  -0.06946429947410336]
On the other hand,

solvePnP ..

Camera1: rvecs [0.3830144497209735;   -0.5153903947692436;  -0.001401186630803216]
         tvecs [-1777.451836911453;  -1097.111339375749;  3807.545406775675]
Euler1 [24.0615, -28.7139, -6.32776]

Camera2: rvecs [1407374883553280; 1337006420426752; 774194163884064.1] (!!)
         tvecs[1.249151852575814;  -4.060149502748567;  -0.06899980661249146]
Euler2 [-122.805, -69.3934, 45.7056]

The values for camera 2 and the tvec of camera 1 are rather disconcerting. My code for the point triangulation and solvePnP looks like this:

points1.convertTo(points1, CV_32F);
points2.convertTo(points2, CV_32F);

 // Normalize image points (pixel coordinates -> normalized camera coordinates)

points1.col(0) = (points1.col(0) - pp.x) / focal;
points2.col(0) = (points2.col(0) - pp.x) / focal;
points1.col(1) = (points1.col(1) - pp.y) / focal;
points2.col(1) = (points2.col(1) - pp.y) / focal;

points1 = points1.t();      points2 = points2.t();

cv::triangulatePoints(P1, P2, points1, points2, points3DH);

cv::Mat points3D;
convertPointsFromHomogeneous(Mat(points3DH.t()).reshape(4, 1), points3D);

cv::solvePnP(points3D, points1.t(), K, noArray(), rvec1, tvec1, 1, CV_ITERATIVE );
cv::solvePnP(points3D, points2.t(), K, noArray(), rvec2, tvec2, 1, CV_ITERATIVE );
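(For reference, P1 and P2 above are the 3x4 projection matrices for the normalized coordinates; since the points are already normalized they are built without K, roughly along these lines, with camera 1 at the origin and camera 2 at the pose recovered from the essential matrix:)

// Rough sketch of how P1/P2 are set up for the normalized points
// (R, t are the relative pose from the essential-matrix decomposition):
cv::Mat P1 = cv::Mat::eye(3, 4, CV_64F);        // camera 1: [I | 0]
cv::Mat P2 = cv::Mat::zeros(3, 4, CV_64F);      // camera 2: [R | t]
R.copyTo(P2(cv::Rect(0, 0, 3, 3)));
t.copyTo(P2(cv::Rect(3, 0, 1, 3)));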

I then convert the rvecs to rotation matrices with Rodrigues and extract Euler angles, roughly as in the sketch below. But since the rvecs and tvecs themselves already seem wrong, I feel something is off in my process. Any pointers would be helpful. Thanks!
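(Sketch of that Euler conversion; RQDecomp3x3 is just one way to read off the angles and its convention may not match the one I quote above:)

// Convert an OpenCV rvec (axis-angle) to a 3x3 rotation matrix with
// Rodrigues, then extract Euler angles (in degrees) via RQDecomp3x3.
cv::Mat Rmat, mtxR, mtxQ;
cv::Rodrigues(rvec1, Rmat);
cv::Vec3d euler = cv::RQDecomp3x3(Rmat, mtxR, mtxQ);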

0 Answers:

There are no answers yet.