Essential matrix estimates motion on the wrong axis

Time: 2019-06-03 10:04:35

Tags: python opencv computer-vision

I am trying to estimate the pose of a monocular camera, but when I use the recoverPose function most of the motion ends up along the z axis, even though I only moved the camera horizontally. In my application we need to recover the pose so that we can generate a 3D model of the recorded scene.

We actually generate the 3D points with a depth camera, but when we tried computing the translation with PnP it gave terrible results. Since the essential matrix only provides a unit displacement, we compute a scale factor by comparing the distance between two pairs of points.

To accumulate the motion, we multiply the homogeneous transformations between each pair of consecutive camera poses, and then, for each frame, we multiply the XYZ1 vector by that accumulated homogeneous transformation.
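The accumulation step can be sketched as follows (a minimal numpy-only illustration; the poses and the point are made up, not taken from the original pipeline):

```python
import numpy as np

def to_homogeneous(R, t):
    # Stack [R | t] into a 4x4 homogeneous transform
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(t).ravel()
    return T

# Two illustrative relative poses: a pure unit translation along x, twice
R = np.eye(3)
t = np.array([1.0, 0.0, 0.0])

T_accum = np.eye(4)
for _ in range(2):
    T_accum = T_accum @ to_homogeneous(R, t)

# Map a homogeneous point [X, Y, Z, 1] through the accumulated transform
p = np.array([0.0, 0.0, 0.0, 1.0])
p_out = T_accum @ p  # -> [2, 0, 0, 1]
```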

We tried changing the pose estimation algorithm to one that feeds the 3D points from the depth sensor into the solvePnP function. The feature matching appears to be very robust, since we match between consecutive frames of a 30 fps image stream.

When multiplying the transformations, I believe the matrix given by recoverPose should be inverted, but neither way gives good results.
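On the inversion question: recoverPose returns (R, t) that map points from the first camera frame into the second (x2 = R·x1 + t), so the camera's own motion expressed in the first frame is the inverse transform. A small numpy sketch with an illustrative rotation (values made up):

```python
import numpy as np

# (R, t) as recoverPose would return them: x2 = R @ x1 + t
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])      # illustrative 90-degree rotation about z
t = np.array([[0.0], [0.0], [1.0]])   # unit-length translation

# Inverse transform: the second camera's centre in first-camera coordinates
R_inv = R.T
t_inv = -R.T @ t

# Sanity check: composing the transform with its inverse yields the identity
I_check = R_inv @ R
zero_check = R_inv @ t + t_inv
```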

Extracting the features

import cv2 as cv
import numpy as np

def find_correspondence(old_ip, old_features, new_ip, new_features):
    # BFMatcher with default params
    # Flag for when we are lost: currently the photo is skipped and we move on
    # to the next one, but it would probably work better if we went back and
    # took the last frame with a good transform as the new reference
    bf = cv.BFMatcher(cv.NORM_HAMMING, crossCheck=True)
    # debugged this part a bit, it was acting up with some vectors
    matches = bf.match(old_features, new_features)
    matches = sorted(matches, key=lambda x: x.distance)
    old_ip_loc = [old_ip[mat.queryIdx].pt for mat in matches]
    new_ip_loc = [new_ip[mat.trainIdx].pt for mat in matches]
    old_ip_loc = np.round(old_ip_loc)
    new_ip_loc = np.round(new_ip_loc)
    return old_ip_loc.astype(int), new_ip_loc.astype(int)

Computing the essential matrix

# find correspondences between the first image in the database and the last photo recorded
points1, points2 = find_correspondence(PointDatabase[counterframe-1], FeatureDatabase[counterframe-1], int_points, features)

if flag == False:  # normal process, the image does not need to be skipped
    # Essential matrix (RANSAC confidence 0.999, threshold 1.0 px)
    E, mask = cv.findEssentialMat(points1, points2, focallength, PPCam, cv.RANSAC, 0.999, 1.0)
    # Get the pose referred to the first photo in the database
    _, R, t, mask = cv.recoverPose(E, points1, points2, focal=focallength, pp=PPCam)
    # Save the pose matrix
    PoseNoScale = np.concatenate((R, t), 1)

Computing the correction


for var in range(0, len(points1commun) - 1):

    Pospoint1btw21 = cv.triangulatePoints(Poses3x4[counterframe-1], Poses3x4[counterframe-2], points2commun[var], points1commun[var])
    Pospoint2btw21 = cv.triangulatePoints(Poses3x4[counterframe-1], Poses3x4[counterframe-2], points2commun[var+1], points1commun[var+1])
    Pospoint1btw23 = cv.triangulatePoints(Poses3x4[counterframe-1], PoseNoScale, points2commun[var], points3commun[var])
    Pospoint2btw23 = cv.triangulatePoints(Poses3x4[counterframe-1], PoseNoScale, points2commun[var+1], points3commun[var+1])

    Pospoint1btw21X.append(Pospoint1btw21[0] / Pospoint1btw21[3])
    Pospoint2btw21X.append(Pospoint2btw21[0] / Pospoint2btw21[3])
    Pospoint1btw23X.append(Pospoint1btw23[0] / Pospoint1btw23[3])
    Pospoint2btw23X.append(Pospoint2btw23[0] / Pospoint2btw23[3])

    # 4 check if the distance between two keypoints remains the same;
    # if it does the scale is 1, otherwise correct the scale
    ComputeScale.append(abs(scale * (Pospoint1btw23X[var] - Pospoint2btw23X[var]) / (Pospoint1btw21X[var] - Pospoint2btw21X[var])))

# 5 Reject outliers
ComputeScaleClean = []
meanscale = np.mean(ComputeScale)

for var2 in range(0, len(ComputeScale)):
    # the scale has to be low
    if ComputeScale[var2] < meanscale:
        # the points have to be more or less similar, so the ratio of
        # distances must lie between 1/10 and 10
        if 0.1 <= ComputeScale[var2] <= 10:
            ComputeScaleClean.append(ComputeScale[var2])

if len(ComputeScaleClean) >= 1:
    scale = np.median(ComputeScaleClean)
else:
    # keep the previous scale
    pass
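Note that the ratio above compares only the X coordinate of the triangulated points, which becomes unstable whenever two keypoints happen to share a similar X. An alternative sketch (numpy only, with a hypothetical helper and made-up point data) compares full 3D Euclidean distances between consecutive point pairs and takes the median ratio:

```python
import numpy as np

def relative_scale(pts_ref, pts_new):
    # pts_ref, pts_new: (N, 3) arrays of the same triangulated points in the
    # reference-scale and in the new unit-scale reconstruction
    d_ref = np.linalg.norm(np.diff(pts_ref, axis=0), axis=1)
    d_new = np.linalg.norm(np.diff(pts_new, axis=0), axis=1)
    ratios = d_ref / d_new
    return np.median(ratios)  # median is robust to outlier matches

# Illustrative data: the same geometry reconstructed at half the scale
pts_ref = np.array([[0, 0, 0], [2, 0, 0], [2, 2, 0]], dtype=float)
pts_new = pts_ref / 2.0
s = relative_scale(pts_ref, pts_new)  # -> 2.0
```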

Since the motion is horizontal, we expected a large change on one axis and a change close to zero on the z axis, but in fact we get large changes on both x and z, which is clearly wrong.

Thanks in advance!

0 Answers:

There are no answers