How to get a more accurate rotation from decomposing a homography matrix

Asked: 2019-02-22 17:17:12

Tags: python opencv image-processing homography rotational-matrices

I am trying to create a program that can calculate the rotation of a plane from two images in Python using OpenCV. To do this I find the homography matrix that represents the transformation between the two images, and then decompose it with OpenCV's decomposeHomographyMat function using the intrinsic camera matrix.
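
In outline the pipeline is just two OpenCV calls (the full code is further down; src_pts and dst_pts are the matched keypoint locations and K is the intrinsic matrix):

# The homography M maps points in the first image to points in the second;
# RANSAC with a 1-pixel threshold rejects mismatched keypoints.
M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 1)

# Decompose M into up to four candidate (rotation, translation, plane normal)
# triples; num is how many candidates were returned.
num, Rs, Ts, Ns = cv2.decomposeHomographyMat(M, K)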

I tested the accuracy using Blender by creating a plane with a QR code on it and rotating it by known values, as seen here where the plane has been rotated by 15, 30, 15 in XYZ Euler coordinates, although I want the final program to take pictures of a plane being transformed in the real world.

The intrinsic camera matrix for Blender was found using this technique. I also found it with the camera calibration function, by placing a checkerboard in the scene and taking renders from several angles and translations.
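
The calibration step can be done with cv2.findChessboardCorners and cv2.calibrateCamera on those renders; a minimal sketch, assuming a board with 9x6 inner corners and renders named calib_*.png (both of these are placeholders, not the actual setup):

import cv2
import glob
import numpy as np

# 3D corner coordinates of the checkerboard in its own plane (z = 0), in units of squares
pattern = (9, 6)                      # inner corners per row/column (assumed)
objp = np.zeros((pattern[0]*pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in glob.glob("calib_*.png"):   # hypothetical filenames for the renders
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K is the 3x3 intrinsic matrix, dist the lens distortion coefficients
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print(K)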

However, when I run the code I get a ZYX Euler output of [27.9, -25.4, -26.31] instead of [15, -30, -15], which is not accurate. Below are some other examples of the code's output against the expected values, to give an idea of its accuracy:

Expected - [0 -30 0]
Calculated - [0.82 -34.51 -1.91]

Expected - [0 0 15]
Calculated - [0 0 -15.02]

Expected - [15 0 15]
Calculated - [16.23 3.76 -13.76]

I would like to know whether there is any way to improve the accuracy of the calculated rotation matrix, or whether this is the best accuracy I can get, and if so, what else I could do to calculate a plane's rotation about all three axes from images (extra cameras could also be added).

Any help would be greatly appreciated!

The code I am using is shown below:

#Import modules
import cv2
import numpy as np
from matplotlib import pyplot as plt
import glob
import math
########################################################################
#Import pictures
img1 = cv2.imread("top.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("150015.png", cv2.IMREAD_GRAYSCALE)

#Feature Extraction
MIN_MATCH_COUNT = 10
sift = cv2.xfeatures2d.SIFT_create()

kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)

FLANN_INDEX_KDTREE = 0

index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks = 50)

flann = cv2.FlannBasedMatcher(index_params, search_params)

matches = flann.knnMatch(des1,des2,k=2)

# store all the good matches as per Lowe's ratio test.
good = []
for m,n in matches:
    if m.distance < 0.80*n.distance:
        good.append(m)

if len(good)>MIN_MATCH_COUNT:
    src_pts = np.float32([ kp1[m.queryIdx].pt for m in good ]).reshape(-1,1,2)
    dst_pts = np.float32([ kp2[m.trainIdx].pt for m in good ]).reshape(-1,1,2)

    #Finds homography matrix
    M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC,1)
    matchesMask = mask.ravel().tolist()

    h,w = img1.shape
    pts = np.float32([ [0,0],[0,h-1],[w-1,h-1],[w-1,0] ]).reshape(-1,1,2)
    dst = cv2.perspectiveTransform(pts,M)

    img2 = cv2.polylines(img2,[np.int32(dst)],True,255,3, cv2.LINE_AA)

else:
    print "Not enough matches are found - %d/%d" % (len(good),MIN_MATCH_COUNT)
    matchesMask = None

draw_params = dict(matchColor = (0,255,0), # draw matches in green color
                   singlePointColor = None,
                   matchesMask = matchesMask, # draw only inliers
                   flags = 2)

img3 = cv2.drawMatches(img1,kp1,img2,kp2,good,None,**draw_params)

plt.imshow(img3, 'gray'),plt.show()


#Camera calibration (intrinsic) matrix K

#Camera calibration matrix from the Blender python script
#K = np.matrix('1181.2500 0 540; 0 2100 540; 0 0 1')

#Camera calibration matrix from calibrating with the checkerboard renders in Blender
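#Layout: [fx 0 cx; 0 fy cy; 0 0 1] (focal lengths and principal point in pixels)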
K = np.matrix('1307.68697 0 600.618354; 0 1309.66779 605.481488; 0 0 1')

#Decompose the homography into up to four candidate solutions, each a
#rotation Rs[i], translation Ts[i] and plane normal Ns[i]
num, Rs, Ts, Ns  = cv2.decomposeHomographyMat(M, K)

# Checks if a matrix is a valid rotation matrix.
def isRotationMatrix(R) :
    Rt = np.transpose(R)
    shouldBeIdentity = np.dot(Rt, R)
    I = np.identity(3, dtype = R.dtype)
    n = np.linalg.norm(I - shouldBeIdentity)
    return n < 1e-6


# Calculates rotation matrix to euler angles
# The result is the same as MATLAB except the order
# of the euler angles ( x and z are swapped ).
def rotationMatrixToEulerAngles(R) :

    assert(isRotationMatrix(R))

    sy = math.sqrt(R[0,0] * R[0,0] +  R[1,0] * R[1,0])

    singular = sy < 1e-6

    if  not singular :
        x = math.atan2(R[2,1] , R[2,2])
        y = math.atan2(-R[2,0], sy)
        z = math.atan2(R[1,0], R[0,0])
    else :
        x = math.atan2(-R[1,2], R[1,1])
        y = math.atan2(-R[2,0], sy)
        z = 0

    return np.array([x, y, z])

#Convert each candidate rotation matrix into XYZ Euler angles (degrees)
for i in range(num):
    R = Rs[i]
    angles = rotationMatrixToEulerAngles(R)
    x = np.degrees(angles[0])
    y = np.degrees(angles[1])
    z = np.degrees(angles[2])
    anglesDeg = np.array([x,y,z])
    print(anglesDeg)
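
Note that decomposeHomographyMat returns its up-to-four candidate solutions in no particular order, and only one pair of them corresponds to the real plane, so some of the error may simply come from reading off the wrong candidate. In the Blender test the reference view (top.png) looks straight at the plane, so the plane normal in the first camera's frame should lie roughly along the optical axis; a rough sketch of picking a candidate under that assumption (this is not something the code above does):

# Pick the candidate whose plane normal is most nearly parallel to the optical
# axis (0, 0, 1) of the reference camera; |n_z| is used so the sign ambiguity of
# the decomposition does not matter, and the rotation is the same for both signs.
best = max(range(num), key=lambda i: abs(np.asarray(Ns[i]).ravel()[2]))
print("most plausible candidate:", best,
      np.degrees(rotationMatrixToEulerAngles(Rs[best])))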

The images I generated from Blender are as follows:

top.png (0x, 0y, 0z)

003000.png (0x, 30y, 0z)

150015.png (15x, 0y, 15z)

153000.png (15x, 30y, 0z)

153015.png (15x, 30y, 15z)

And here is an image showing the keypoint matches for the 153015.png comparison.

0 Answers