I use the following code to georeference an image, with inputs
grid = "for example a utm grid"
img_raw = cv2.imread(filename)
mtx, dist = ("intrinsic camera matrix and "
             "distortion coefficients from calibration")
src_pts = "camera location of gcp on undistorted image"
dst_pts = "world location of gcp in the grid coordinate"
I correct the lens distortion and apply a homography:
img = cv2.undistort(img_raw, mtx, dist, None, None)
H, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC,5.0)
# dsize is (width, height), i.e. columns first
img_geo = cv2.warpPerspective(img, H, (grid.shape[1], grid.shape[0]),
                              flags=cv2.INTER_NEAREST, borderValue=0)
Then I want to recover the camera position. I tried to use the rotation and translation computed by cv2.solvePnP, as shown here. If I understand correctly, I need at least a set of 4 coplanar points with their camera and world coordinates.
flag, rvec, tvec = cv2.solvePnP(world, cam, mtx, dist)
If I am right, for solvePnP the camera coordinates need to come from the original image frame, not from the undistorted frame used for src_pts.
So my question is: how do I get the pixel locations of src_pts in the original image frame? Or is there another way to obtain rvec and tvec?
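For reference, mapping an undistorted pixel back to the raw frame amounts to applying the distortion model forward: normalize the pixel with the intrinsics, apply the distortion polynomial, and reproject. A minimal numpy sketch keeping only the radial terms k1 and k2 (an assumption for illustration; OpenCV's full model also includes tangential and higher-order terms, and the matrix values below are made up):

```python
import numpy as np

# Map undistorted pixel coordinates to their raw-image positions by
# applying the lens distortion forward (radial terms only here).
def distort_points(pts, K, k1, k2):
    norm = (pts - K[:2, 2]) / np.diag(K)[:2]    # pixel -> normalized coords
    r2 = np.sum(norm**2, axis=1, keepdims=True) # squared radius
    norm_d = norm * (1 + k1 * r2 + k2 * r2**2)  # radial distortion
    return norm_d * np.diag(K)[:2] + K[:2, 2]   # normalized -> pixel

K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
# The principal point is unaffected by radial distortion:
print(distort_points(np.array([[320.0, 240.0]]), K, -0.1, 0.01))
```

With barrel distortion (negative k1), off-axis points move toward the image centre, which is the displacement the question is asking about.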
Answer 0 (score: 1)
Here is the solution I found:
grid = "for example a utm grid"
img_raw = cv2.imread(filename)
mtx, dist = ("intrinsic camera matrix and "
             "distortion coefficients from calibration")
src_pts = "camera location of gcp on raw image"
dst_pts = "world location of gcp in the grid coordinate"
Note that src_pts are now points in the raw, distorted image.
src_pts_undistorted = cv2.undistortPoints(src_pts, mtx, dist, P=mtx)
img = cv2.undistort(img_raw, mtx, dist, None, None)
H, mask = cv2.findHomography(src_pts_undistorted, dst_pts, cv2.RANSAC,5.0)
# dsize is (width, height), i.e. columns first
img_geo = cv2.warpPerspective(img, H, (grid.shape[1], grid.shape[0]),
                              flags=cv2.INTER_NEAREST, borderValue=0)
Then I can obtain the pose from solvePnP:
# dst_pts must be 3-D object points (append z = 0 for coplanar GCPs);
# src_pts are raw-image points, so passing dist here is correct
flag, rvec, tvec = cv2.solvePnP(dst_pts, src_pts, mtx, dist)
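Once rvec and tvec are available, the camera's world position (the original goal of the question) follows as C = -Rᵀt, where R = cv2.Rodrigues(rvec)[0]. A numpy sketch of just that last step, using a made-up rotation and translation:

```python
import numpy as np

# solvePnP's pose maps world points into the camera frame: x_cam = R x_world + t.
# The camera centre in world coordinates is therefore C = -R^T t.
def camera_position(R, tvec):
    return (-R.T @ tvec).ravel()

# Made-up pose: 90 deg rotation about z, camera 5 units along its own z-axis.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
tvec = np.array([[0.0], [0.0], [5.0]])
print(camera_position(R, tvec))  # -> [ 0.  0. -5.]
```

With the actual solvePnP output one would first convert the rotation vector: R, _ = cv2.Rodrigues(rvec).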
Answer 1 (score: 0)
Maybe the function projectPoints is what you need. Here is the link: http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#projectpoints
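For context, projectPoints applies the pinhole model u = K(RX + t) followed by the distortion terms. A distortion-free numpy sketch of the core operation (the intrinsic matrix and test point below are made up for illustration):

```python
import numpy as np

# What cv2.projectPoints does, distortion omitted: transform the 3-D points
# into the camera frame, perspective-divide, then apply the intrinsics.
def project(points_3d, R, tvec, K):
    cam = points_3d @ R.T + tvec.ravel()  # world -> camera frame
    uv = cam[:, :2] / cam[:, 2:3]         # perspective division
    return uv @ K[:2, :2].T + K[:2, 2]    # apply fx, fy, cx, cy

K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
pts = np.array([[0.0, 0.0, 5.0]])         # point on the optical axis
print(project(pts, np.eye(3), np.zeros(3), K))  # -> [[320. 240.]]
```

Projecting the GCPs with the recovered rvec and tvec and comparing against src_pts is also a quick sanity check of the solvePnP result.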