Python 2.7 / OpenCV 3.0: large error when computing an object's 3D points with cv2.triangulatePoints

Asked: 2017-12-20 13:03:56

Tags: python-2.7 opencv 3d triangulation

I am writing code in Python 2.7 with OpenCV 3.0 to find the 3D points of an object. I created three Python scripts (triangulate, centroid_cal and rectify_img), two of which are function scripts (centroid_cal and rectify_img, containing the functions centroid_cal and rectify). The sequence I follow:

  • Calibrate the stereo cameras and save cameraMatrixL, distCoeffsL, cameraMatrixR, distCoeffsR, R, T, where R and T are the rotation and translation matrices. The calibration error for both cameras is less than 0.03 pixels.
  • Capture left and right images with the calibrated cameras and rectify them using cv2.stereoRectify, which returns R1, R2, P1, P2, Q, validPixROI1, validPixROI2:

        R1, R2, P1, P2, Q, validPixROI1, validPixROI2 = cv2.stereoRectify(cameraMatrixL, distCoeffsL, cameraMatrixR, distCoeffsR, (640,480), R, T, alpha=0)
  • Then I remap (undistort and rectify) the images:

        mapxL, mapyL = cv2.initUndistortRectifyMap(cameraMatrixL, distCoeffsL, R1, P1, (640,480), cv2.CV_32FC1)
        mapxR, mapyR = cv2.initUndistortRectifyMap(cameraMatrixR, distCoeffsR, R2, P2, (640,480), cv2.CV_32FC1)
        left_image = cv2.remap(left_image_undist, mapxL, mapyL, cv2.INTER_LINEAR)
        right_image = cv2.remap(right_image_undist, mapxR, mapyR, cv2.INTER_LINEAR)
  • Find the corresponding points in the left and right images using the centroid_cal script.
  • Find the 3D points using cv2.triangulatePoints.

The complete code for each script is given below.

Code for the triangulation script:

import numpy as np
import cv2
from rectify_img import rectify
from centroid_cal import centroid_cal

left_image_undist = cv2.imread('left150.png')
right_image_undist = cv2.imread('right150.png')
left_image,right_image,P1,P2 = rectify(left_image_undist,right_image_undist)
left_points = centroid_cal(left_image)
right_points = centroid_cal(right_image)
points = cv2.triangulatePoints(P1,P2,left_points,right_points)
points /= points[3]
print points
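One way to localize where things go wrong is to reproject the triangulated homogeneous point back through the projection matrices and compare against the measured centroids; a large residual points at bad correspondences or mismatched matrices. A sketch with a hypothetical P1 and point (the real values come from the scripts above):

```python
import numpy as np

def reprojection_error(P, X, measured):
    # Project a homogeneous 3D point X (4x1) through a 3x4 matrix P and
    # return the pixel distance to the measured image point.
    proj = P.dot(X)
    proj = proj[:2, 0] / proj[2, 0]
    return np.linalg.norm(proj - np.asarray(measured, dtype=float))

# Hypothetical stand-ins for the rectified P1 and a triangulated point.
P1 = np.array([[500.0,   0.0, 320.0, 0.0],
               [  0.0, 500.0, 240.0, 0.0],
               [  0.0,   0.0,   1.0, 0.0]])
X = np.array([[-55.0], [-50.0], [340.0], [1.0]])

# Synthesize a consistent "measurement" so the residual is ~0 here.
u = P1.dot(X)
measured = u[:2, 0] / u[2, 0]
print(reprojection_error(P1, X, measured))
```

With real data, run this for both P1/left_points and P2/right_points; whichever side shows the large residual is where the error enters.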

Code for rectify_img:
def rectify(left_image_undist,right_image_undist):
    import numpy as np
    import cv2
    cameraMatrixL = np.load('mtx_Left.npy')
    distCoeffsL = np.load('dist_Left.npy')
    cameraMatrixR = np.load('mtx_Right.npy')
    distCoeffsR = np.load('dist_Right.npy')
    R = np.load('R.npy')
    T = np.load('T.npy')
    R1,R2,P1,P2,Q,validPixROI1, validPixROI2 = cv2.stereoRectify(cameraMatrixL,distCoeffsL,cameraMatrixR,distCoeffsR,(640,480),R,T,alpha=0)
    #computes undistort and rectify maps
    mapxL, mapyL = cv2.initUndistortRectifyMap(cameraMatrixL, distCoeffsL, R1, P1, (640,480), cv2.CV_32FC1)
    mapxR, mapyR = cv2.initUndistortRectifyMap(cameraMatrixR, distCoeffsR, R2, P2, (640,480), cv2.CV_32FC1)
    left_image = cv2.remap(left_image_undist, mapxL, mapyL,cv2.INTER_LINEAR)
    right_image = cv2.remap(right_image_undist, mapxR, mapyR,cv2.INTER_LINEAR)
    return left_image,right_image,P1,P2
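After rectification, corresponding points should lie on (almost) the same image row, so a quick sanity check on the matched centroids is to look at the row difference. The centroid values below are hypothetical; the real ones come from centroid_cal:

```python
import numpy as np

# In a well-rectified pair, corresponding points share the same row,
# so |y_left - y_right| should be roughly a pixel or less.
left_centroid = np.array([250.0, 212.0])
right_centroid = np.array([118.0, 212.6])

row_diff = abs(left_centroid[1] - right_centroid[1])
print(row_diff)  # large values indicate calibration/rectification problems
```

If the measured row difference is several pixels, the triangulated depth will be off even with correct code.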

Code for centroid_cal (I am detecting white color):

import numpy as np
import cv2
from skimage import morphology

def centroid_cal(image):
    lower_white = np.array([100,100,100], dtype=np.uint8)
    upper_white = np.array([255,255,255], dtype=np.uint8)
    bin_img_pts = cv2.inRange(image, lower_white, upper_white)
    #cv2.imshow('res',binimg)
    bin_img_pts[bin_img_pts!=0] = 255
    # flood fill background to find inner holes
    holes_in_pts = bin_img_pts.copy()
    retval, image, mask, rect = cv2.floodFill(holes_in_pts, None, (0, 0), 255)
    # invert holes mask, bitwise or with img fill in holes
    holes_in_pts_inv = cv2.bitwise_not(holes_in_pts)
    #cv2.imshow('holes',holes)
    filled_holes_pts = cv2.bitwise_or(bin_img_pts, holes_in_pts_inv)
    #cv2.imshow('filled holes', filled_holes_pts)
    _pts_img_label = morphology.label(filled_holes_pts)
    #cv2.imshow('label',_pts_img_label)
    cleaned_pts_img = morphology.remove_small_objects(_pts_img_label, min_size=1264, connectivity=4)
    #cv2.imshow('clea',cleaned_pts_img)
    img_unlabel_pts = np.zeros((_pts_img_label.shape))
    img_unlabel_pts[cleaned_pts_img > 0] = 255
    img_unlabel_pts = np.uint8(img_unlabel_pts)
    #### here conversion of array into uint8 data conversion is important
    ### else cv2.connectedComponentsWithStats will show error.
    nb_components, output, stats_pts, centroids_pts = cv2.connectedComponentsWithStats(img_unlabel_pts, connectivity=4)
    #cv2.imshow("centroid", img3)
    centroids_pts = centroids_pts[np.where(stats_pts[:, -1] > 1000)]
    centroids_pts = centroids_pts[1:nb_components]
    req_points = centroids_pts[1,:]
    #print(_pts_points)
    return req_points

The answer I get (in mm):

    [[ -74.75449128]
     [ -32.9271306 ]
     [ 320.21282459]
     [   1.0       ]]

whereas it should be approximately:

    [[ -55.00]
     [ -50.00]
     [ 340.00]
     [   1.00]]

How can I correct this error? Where am I going wrong?

I am also posting all the matrices here for reference.

cameraMatrixL = np.array(
    [[ 534.40241484,    0.        ,  298.51610503],
     [   0.        ,  527.62465955,  214.45395059],
     [   0.        ,    0.        ,    1.        ]])

distCoeffsL = np.array([[ 0.05192082, -0.0262804 , -0.00407178, -0.00618521, -0.22427776]])

cameraMatrixR = np.array(
    [[ 540.33748563,    0.        ,  304.35042046],
     [   0.        ,  534.67506784,  218.16718612],
     [   0.        ,    0.        ,    1.        ]])

distCoeffsR = np.array([[ 0.0386014 ,  0.34514765, -0.00434087, -0.00734639, -2.09991534]])

R = np.array(
    [[ 0.99959407, -0.02457877,  0.01440768],
     [ 0.02455863,  0.99969715,  0.00157338],
     [-0.01444199, -0.00121891,  0.99989497]])

T = np.array(
    [[-94.95904357],
     [ -0.849498  ],
     [  7.4674219 ]])
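As a sanity check on the calibration itself, the stereo baseline is the norm of T (in the calibration's units, millimetres here); with the values above it comes out a little over 95 mm:

```python
import numpy as np

# The baseline is the length of the translation vector T between cameras.
T = np.array([[-94.95904357],
              [ -0.849498  ],
              [  7.4674219 ]])
baseline = np.linalg.norm(T)
print(baseline)  # a little over 95 mm
```

If this value does not match the physically measured separation between the two cameras, the calibration, not the triangulation code, is the first thing to revisit.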

I have uploaded my images and all matrices in case anyone needs them.

Link to the Google Drive with the saved files: LINK

I am also adding the two images: [left image] [right image]

Thanks.

0 Answers