I am trying to use the essential matrix method in OpenCV to obtain the R and t of one camera pose with respect to another. The procedure I am following is: match points between the two images, estimate the fundamental matrix F, compute the essential matrix E from F and the camera matrix, and decompose E into R and t.
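For reference, here is a minimal sketch of that pipeline (not the exact code from the link below; pts1, pts2 and the intrinsic matrix K are placeholder names):

    import cv2
    import numpy as np

    # pts1, pts2: Nx2 float arrays of matched image points; K: 3x3 camera intrinsic matrix
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)

    # Essential matrix from the fundamental matrix and the (single) camera matrix
    E = K.T @ F @ K

    # Recover the relative pose (R, and t up to scale); recoverPose does the cheirality check internally
    retval, R, t, pose_mask = cv2.recoverPose(E, pts1, pts2, K)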
For a simple check, I tested this with an image pair consisting of the same image twice (so that neither the camera nor the image points have moved); hence the translation vector should be null and the rotation should be identity. But the output of the program ends up being wrong.
The fundamental matrix is
[[ 3.59955121e-17 -5.77350269e-01 2.88675135e-01]
[ 5.77350269e-01 5.55111512e-17 2.88675135e-01]
[ -2.88675135e-01 -2.88675135e-01 0.00000000e+00]]
Fundamental matrix error check: 0.000000
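(The error check above is, I assume, the mean algebraic epipolar error, something along these lines, with pts1 and pts2 the matched points as before:)

    import numpy as np

    def epipolar_error(F, pts1, pts2):
        # Mean absolute algebraic epipolar error |x2^T F x1| over all matches
        ones = np.ones((pts1.shape[0], 1))
        x1 = np.hstack([pts1, ones])  # homogeneous coordinates, Nx3
        x2 = np.hstack([pts2, ones])
        return np.mean(np.abs(np.sum(x2 * (x1 @ F.T), axis=1)))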
The essential matrix is
[[ 4.51463713e-10 -7.25229650e+06 -2.37367600e+06]
[ 7.25229650e+06 6.98357978e-10 4.27847619e+06]
[ 2.37367600e+06 -4.27847619e+06 -1.33013600e-10]]
Translation matrix is
[-0.48905495 -0.2713251 0.82898007]
Rotation matrix is
[[ 0.52165052 -0.26538577 0.8108336 ]
[-0.26538577 0.85276538 0.4498462 ]
[ 0.8108336 0.4498462 -0.3744159 ]]
Roll: -26.965168, Pitch: 129.775110, Yaw: -54.179055
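For reference, a typical way to get roll/pitch/yaw from R is shown below (standard X-Y-Z convention; my actual conversion is in the linked code and may use a different convention):

    import numpy as np

    def rotation_to_euler(R):
        # Roll, pitch, yaw in degrees for R = Rz(yaw) @ Ry(pitch) @ Rx(roll)
        roll = np.arctan2(R[2, 1], R[2, 2])
        pitch = np.arctan2(-R[2, 0], np.sqrt(R[0, 0] ** 2 + R[1, 0] ** 2))
        yaw = np.arctan2(R[1, 0], R[0, 0])
        return np.degrees([roll, pitch, yaw])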
I also used this code with a pair of cameras displaced by a certain distance in X, but the Euler angles and translation I obtain with this technique (there, I consider two camera matrices instead of one) are still wrong. The translation vector tells me that I have moved in both X and Z, and the rotation matrix is not accurate. I am confused as to what might be going wrong here. Any suggestions would be very helpful. Thank you!
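For that two-camera case, the only change to the sketch above is how E is formed and how the points are normalised before recovering the pose (again a sketch, with K1/K2 as placeholder names for the two intrinsic matrices):

    # K1, K2: intrinsic matrices of the first and second camera
    E = K2.T @ F @ K1

    # Normalise each point set with its own intrinsics, then recover the pose
    pts1_n = cv2.undistortPoints(pts1.reshape(-1, 1, 2), K1, None)
    pts2_n = cv2.undistortPoints(pts2.reshape(-1, 1, 2), K2, None)
    # Note: the recovered t is a unit direction vector (translation is only defined up to scale)
    retval, R, t, pose_mask = cv2.recoverPose(E, pts1_n, pts2_n, np.eye(3))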
EDIT: My code can be viewed here
Answer 0 (score: 0):
I think you need to undistort the images using the camera matrix and distortion coefficients before you move on to feature matching. I know this is very late, but I hope it helps someone else.
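A minimal sketch of what that could look like, assuming K and dist hold the calibration results (placeholder names):

    import cv2

    # K: 3x3 camera matrix, dist: distortion coefficients from calibration
    img1_u = cv2.undistort(img1, K, dist)
    img2_u = cv2.undistort(img2, K, dist)
    # Run feature detection and matching on img1_u / img2_u instead of the raw images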