I am trying to solve a plane2plane projection problem using the findHomography function found in the OpenCV library. As a toy example, I have a set of points P in R^2 and a second set of points Q in R^2, where Qx = Px + 50 and Qy = Py. Meaning, I offset the x-coordinates by 50. Now I run the following code:
Mat projectionMatrix = findHomography(Q, P);
vector<Point2f> projectedPoints(Q.size());
perspectiveTransform(Q, projectedPoints, projectionMatrix);
This gives me back P, which is great. However, now I want the rotation and translation matrices, R and T, and this is where I get confused. OpenCV 3 has a function called decomposeHomographyMat that returns up to 4 solutions for R and T (it also returns the normals, but I do not store those). I run it like this:
vector<Mat> Rs;
vector<Mat> Ts;
decomposeHomographyMat(projectionMatrix, cameraMatrix, Rs, Ts, noArray());
The cameraMatrix I use is tried and tested from previous experiments. OK, so I get my four results. Looking at them, I notice that I get the identity matrix as the result for all of the Rs, which is great. However, all of the translation vectors are [0,0,0]^T, whereas I would expect at least one of them to be [-50,0,0]^T. Is there something I need to do with the result of decomposeHomographyMat to get the expected behaviour?
Thanks
Answer 0 (score: 10)
It turns out I was wrong in some of the points above, so I have decided to rewrite this answer.
In short — you are getting strange results because of an incorrect intrinsic parameter matrix.
Using the terminology from the paper "Malis, E. and Vargas, M., Deeper understanding of the homography decomposition for vision-based control" (which the homography decomposition in OpenCV is based on), the matrix of the perspective transformation is denoted H and called the Euclidean homography matrix, and the result of its normalization, G = K^-1 * H * K (where K is the camera's calibration matrix), is called the homography matrix.
Both cv::findHomography() and cv::decomposeHomographyMat() work with the Euclidean homography matrix. But in order to decompose it into rotation and translation, cv::decomposeHomographyMat() normalizes the Euclidean homography matrix to obtain the homography matrix. It relies on the user-provided K to perform this normalization.
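For illustration, here is a minimal NumPy sketch of that normalization step, using the notation quoted from the paper above; the K values are hypothetical sample intrinsics, and H stands in for a findHomography result:

import numpy as np

# normalization described above: G = K^-1 * H * K
K = np.array([[100.0, 0.0, 320.0],
              [0.0, 100.0, 240.0],
              [0.0, 0.0, 1.0]])
H = np.eye(3)  # placeholder; in practice, the output of findHomography
G = np.linalg.inv(K) @ H @ K  # normalized homography matrix
print(G)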
As for the estimation of K, I think it is out of the scope of this question. This problem is called camera auto-calibration; here is a relevant quote from the Wikipedia article on it:
"Therefore, three views are the minimum needed for full calibration with fixed intrinsic parameters between views. Quality modern imaging sensors and optics may also provide further prior constraints on the calibration such as zero skew (orthogonal pixel grid) and unit aspect ratio (square pixels). Integrating these priors will reduce the minimal number of images needed to two."
So, it appears you could extract K from 2 frames of the same camera under the zero-skew and square-pixel assumptions. However, I am not familiar with this topic, so I cannot give you more advice on it.
So, to check whether my interpretation is correct, I have made a small example that projects some points on a 3D plane onto 2 virtual cameras, finds the homography, decomposes it, and allows comparing this decomposition with the ground-truth rotation and translation vectors. This is better than real inputs, because this way we know K precisely, and can decouple the error in its estimation from the error in R and t. For my inputs I have checked that it is able to correctly estimate the rotation and translation vectors, although for some reason the translation is always 10 times smaller than the ground truth. Maybe this decomposition reveals translation only up to a scale (I am not sure about that right now), but it is interesting that it is related to the ground-truth value with a fixed coefficient.
Here is the source:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    // set up a virtual camera
    float f = 100, w = 640, h = 480;

    cv::Mat1f K = (cv::Mat1f(3, 3) <<
                   f, 0, w/2,
                   0, f, h/2,
                   0, 0, 1);

    // set transformation from 1st to 2nd camera (assume K is unchanged)
    cv::Mat1f rvecDeg = (cv::Mat1f(3, 1) << 45, 12, 66);
    cv::Mat1f t = (cv::Mat1f(3, 1) << 100, 200, 300);

    std::cout << "-------------------------------------------\n";
    std::cout << "Ground truth:\n";
    std::cout << "K = \n" << K << std::endl << std::endl;
    std::cout << "rvec = \n" << rvecDeg << std::endl << std::endl;
    std::cout << "t = \n" << t << std::endl << std::endl;

    // set up points on a plane
    std::vector<cv::Point3f> p3d{{0, 0, 10},
                                 {100, 0, 10},
                                 {0, 100, 10},
                                 {100, 100, 10}};

    // project on both cameras
    std::vector<cv::Point2f> Q, P;
    cv::projectPoints(p3d,
                      cv::Mat1d::zeros(3, 1),
                      cv::Mat1d::zeros(3, 1),
                      K,
                      cv::Mat(),
                      Q);
    cv::projectPoints(p3d,
                      rvecDeg*CV_PI/180,
                      t,
                      K,
                      cv::Mat(),
                      P);

    // find homography
    cv::Mat H = cv::findHomography(Q, P);

    std::cout << "-------------------------------------------\n";
    std::cout << "Estimated H = \n" << H << std::endl << std::endl;

    // check by reprojection
    std::vector<cv::Point2f> P_(P.size());
    cv::perspectiveTransform(Q, P_, H);

    float sumError = 0;
    for (size_t i = 0; i < P.size(); i++) {
        sumError += cv::norm(P[i] - P_[i]);
    }

    std::cout << "-------------------------------------------\n";
    std::cout << "Average reprojection error = "
              << sumError/P.size() << std::endl << std::endl;

    // decompose the homography using the camera matrix K
    std::vector<cv::Mat> Rs, Ts;
    cv::decomposeHomographyMat(H,
                               K,
                               Rs, Ts,
                               cv::noArray());

    std::cout << "-------------------------------------------\n";
    std::cout << "Estimated decomposition:\n\n";
    std::cout << "rvec = " << std::endl;
    for (auto R_ : Rs) {
        cv::Mat1d rvec;
        cv::Rodrigues(R_, rvec);
        std::cout << rvec*180/CV_PI << std::endl << std::endl;
    }

    std::cout << std::endl;

    std::cout << "t = " << std::endl;
    for (auto t_ : Ts) {
        std::cout << t_ << std::endl << std::endl;
    }

    return 0;
}
And here is the output on my machine:
-------------------------------------------
Ground truth:
K =
[100, 0, 320;
0, 100, 240;
0, 0, 1]
rvec =
[45;
12;
66]
t =
[100;
200;
300]
-------------------------------------------
Estimated H =
[0.04136041220427821, 0.04748763375951008, 358.5557917287962;
0.05074854454707714, 0.06137211243830468, 297.4585754092336;
8.294458769850147e-05, 0.0002294875562580223, 1]
-------------------------------------------
Average reprojection error = 0
-------------------------------------------
Estimated decomposition:
rvec =
[-73.21470385654712;
56.64668212487194;
82.09114210289061]
[-73.21470385654712;
56.64668212487194;
82.09114210289061]
[45.00005330430893;
12.00000697952995;
65.99998380038915]
[45.00005330430893;
12.00000697952995;
65.99998380038915]
t =
[10.76993852870029;
18.60689642878277;
30.62344129378435]
[-10.76993852870029;
-18.60689642878277;
-30.62344129378435]
[10.00001378255982;
20.00002581449634;
30.0000336510648]
[-10.00001378255982;
-20.00002581449634;
-30.0000336510648]
As you can see, there is a correct estimate of the rotation vector among the hypotheses, and there is an up-to-scale correct estimate of the translation.
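A plausible explanation for that fixed coefficient: in the Malis–Vargas formulation, the normalized homography induced by a plane has the form R + (t/d)·n^T, where d is the distance from the first camera to the plane, so the decomposition can only recover t/d. In this example the plane sits at z = 10, which matches the factor of 10. A minimal sketch of undoing that scale, assuming d is known (the variable names are mine, and t_est is one of the candidates printed above):

import numpy as np

# decomposeHomographyMat returned t/d; rescale by the plane distance d
t_est = np.array([10.00001378255982, 20.00002581449634, 30.0000336510648])
d = 10.0                 # distance from the 1st camera to the plane (z = 10)
t_metric = t_est * d     # ~ the ground-truth translation [100, 200, 300]
print(t_metric)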
Answer 1 (score: 0)
You should use solvePnP on each plane, and then obtain the relative translation and rotation from the two camera poses (assuming you have at least 4 points); a sketch of this approach follows below.
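Here is a minimal sketch of that approach, under the same synthetic setup as the script further below (p3d are the 3D plane points, Q and P their projections in the two views, K the intrinsics; relative_pose is a hypothetical helper name):

import cv2
import numpy as np

def relative_pose(p3d, Q, P, K):
    # pose of the plane in each camera frame (needs at least 4 points)
    _, rvec1, t1 = cv2.solvePnP(p3d, Q, K, None)
    _, rvec2, t2 = cv2.solvePnP(p3d, P, K, None)
    R1, _ = cv2.Rodrigues(rvec1)
    R2, _ = cv2.Rodrigues(rvec2)
    # compose the two poses: x_cam2 = R_rel @ x_cam1 + t_rel
    R_rel = R2 @ R1.T
    t_rel = t2 - R_rel @ t1
    return R_rel, t_rel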
The problem with decomposeHomographyMat is that you get up to 4 solutions... You can remove the two of them that would land outside the image FOV, but this is still a problem; one way to prune is sketched after this paragraph.
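If your OpenCV build is recent enough (3.4.1 or later, to my knowledge), cv2.filterHomographyDecompByVisibleRefpoints can do that pruning by discarding decompositions inconsistent with the matched points. A minimal sketch, reusing the question's toy 50-pixel shift and hypothetical sample intrinsics:

import cv2
import numpy as np

# toy data: four matched points related by a pure 50px shift along x
Q = np.array([[[0, 0]], [[100, 0]], [[0, 100]], [[100, 100]]], np.float32)
P = Q + np.array([50, 0], np.float32)
K = np.array([[100.0, 0.0, 320.0], [0.0, 100.0, 240.0], [0.0, 0.0, 1.0]])
H, _ = cv2.findHomography(Q, P)

# decompose, then keep only the solutions compatible with the visible points
num, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
good = cv2.filterHomographyDecompByVisibleRefpoints(Rs, normals, Q, P)
print(good)  # indices of the decompositions that remain plausible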
(The scale issue in the answer above comes from the plane there being placed at z=10, so the recovered translation ends up divided by that plane distance.) Added below is the above code correctly re-implemented (fixed, and in Python):
import cv2
import numpy as np

# set up a virtual camera
f = 100
w = 640
h = 480

K = np.array([[f, 0, w/2],
              [0, f, h/2],
              [0, 0, 1]])
dist_coeffs = np.zeros(5)

# set transformation from 1st to 2nd camera (assume K is unchanged)
rvecDeg = np.array([[45, 12, 66]]).T
t = np.array([[100.0, 200, 300]]).T

print("-------------------------------------------\n")
print("Ground truth:\n")
print("K = \n" + str(K))
print("rvec = \n" + str(rvecDeg))
print("t = \n" + str(t))

# set up points on a plane (z = 1, so the recovered translation is not rescaled)
p3d = np.array([[0, 0, 1],
                [100, 0, 1],
                [0, 100, 1],
                [100, 100, 1]], dtype=np.float64)

# project on both cameras
Q, _ = cv2.projectPoints(p3d,
                         np.zeros((3, 1)),
                         np.zeros((3, 1)),
                         K,
                         dist_coeffs)
P, _ = cv2.projectPoints(p3d,
                         rvecDeg*np.pi/180,
                         t,
                         K,
                         dist_coeffs)

# find homography
H, _ = cv2.findHomography(Q, P)

print("-------------------------------------------\n")
print("Estimated H = \n" + str(H))

# check by reprojection
P_ = cv2.perspectiveTransform(Q, H)
sumError = 0
for i in range(P.shape[0]):
    sumError += np.linalg.norm(P[i] - P_[i])

print("-------------------------------------------\n")
print("Average reprojection error = " + str(sumError/P.shape[0]))

# decompose the homography using the camera matrix K
num_res, Rs, ts, n = cv2.decomposeHomographyMat(H, K)

print("-------------------------------------------\n")
print("Estimated decomposition:\n\n")
for i, (R, t) in enumerate(zip(Rs, ts)):
    print("option " + str(i+1))
    print("rvec = ")
    rvec, _ = cv2.Rodrigues(R)
    print(rvec*180/np.pi)
    print("t = ")
    print(t)
And these are the results (option 3 is the correct one):
-------------------------------------------
Ground truth:
K =
[[100. 0. 320.]
[ 0. 100. 240.]
[ 0. 0. 1.]]
rvec =
[[45]
[12]
[66]]
t =
[[100.]
[200.]
[300.]]
-------------------------------------------
Estimated H =
[[3.93678513e-03 4.51998690e-03 3.53830416e+02]
[4.83037187e-03 5.84154353e-03 3.05790229e+02]
[7.89487379e-06 2.18431675e-05 1.00000000e+00]]
-------------------------------------------
Average reprojection error = 1.1324020050061362e-05
-------------------------------------------
Estimated decomposition:
option 1
rvec =
[[-78.28555877]
[ 58.01301837]
[ 81.41634175]]
t =
[[100.79816988]
[198.59277542]
[300.6675498 ]]
option 2
rvec =
[[-78.28555877]
[ 58.01301837]
[ 81.41634175]]
t =
[[-100.79816988]
[-198.59277542]
[-300.6675498 ]]
option 3
rvec =
[[45.0000205 ]
[12.00005488]
[65.99999505]]
t =
[[100.00010826]
[200.00026791]
[300.00034698]]
option 4
rvec =
[[45.0000205 ]
[12.00005488]
[65.99999505]]
t =
[[-100.00010826]
[-200.00026791]
[-300.00034698]]