How can I calculate the real-world distance in centimeters between pixels using the camera parameters?

Asked: 2018-09-09 17:13:56

Tags: python opencv camera-calibration

I have calibrated my camera separately with ROS, OpenCV and Matlab. People say that I need the extrinsic parameters to calculate the real distance in centimeters between pixels in the image. ROS does not provide the extrinsic parameters explicitly; instead it provides a (3,4) projection matrix, which is the product of the intrinsic and extrinsic parameters.

ROS camera.yaml file which includes the camera parameters
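As far as I understand, that (3,4) projection matrix is just K·[R|t], so in principle it can be decomposed back into intrinsics and extrinsics. A minimal sketch of that idea (the numbers in P are placeholders, not my actual calibration):

import numpy as np
import cv2

# Placeholder 3x4 projection matrix, as published in a ROS camera.yaml
# (made-up numbers, not my real calibration).
P = np.array([[600.,   0., 320., -60.],
              [  0., 600., 240.,  30.],
              [  0.,   0.,   1.,   0.5]])

# Split P back into the intrinsic matrix K, the rotation R and the camera centre.
K, R, C_hom, _, _, _, _ = cv2.decomposeProjectionMatrix(P)
C = (C_hom[:3] / C_hom[3]).ravel()   # camera centre in world coordinates (t = -R @ C)

print("K =\n", K)
print("R =\n", R)
print("camera centre =", C)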

That is why I calibrated the camera again with OpenCV and Matlab to obtain the extrinsic parameters. Even though I have searched for how to calculate the real distance in centimeters between pixels (i.e. from (x1,y1) to (x2,y2)), I still do not know how to do it, and I do not understand which parameters should be used for the distance calculation. I want to compute the distances between pixels with OpenCV and write the output to a txt file, so that the robot can be driven using that file.

For example, this is a sample pixel output array for the path,

array([[  4.484375  , 799.515625  ],
       [ 44.484375  , 487.        ],
       [255.296875  , 476.68261719],
       [267.99707031, 453.578125  ],
       [272.484375  , 306.        ],
       [403.484375  , 300.515625  ],
       [539.484375  , 296.515625  ],
       [589.421875  , 270.00292969],
       [801.109375  , 275.18554688],
       [819.        , 467.515625  ]])

I want to find the real distances between these pixels, in order, in centimeters.
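From what I have read so far, one possible approach, if all of these points lie on one flat plane (the maze floor), is to estimate a homography from image pixels to floor coordinates in centimeters using a few reference points whose floor positions are known, map the path through it, and then take consecutive Euclidean distances. A rough sketch of that idea, which also writes the result to a txt file (the reference pixels and their cm positions are made-up placeholders):

import numpy as np
import cv2

# The path points from the array above, as float32 for OpenCV.
path_px = np.float32([
    [  4.484375  , 799.515625  ],
    [ 44.484375  , 487.        ],
    [255.296875  , 476.68261719],
    [267.99707031, 453.578125  ],
    [272.484375  , 306.        ],
    [403.484375  , 300.515625  ],
    [539.484375  , 296.515625  ],
    [589.421875  , 270.00292969],
    [801.109375  , 275.18554688],
    [819.        , 467.515625  ]])

# Hypothetical reference points: four pixels whose positions on the maze floor
# are known in centimeters (measured by hand). These numbers are made up.
ref_px = np.float32([[100, 100], [900, 100], [900, 800], [100, 800]])
ref_cm = np.float32([[0, 0], [120, 0], [120, 100], [0, 100]])

# Homography that maps image pixels to floor coordinates in cm
# (only valid if every path point lies on that same flat floor).
H, _ = cv2.findHomography(ref_px, ref_cm)

# Map the path into floor coordinates.
path_cm = cv2.perspectiveTransform(path_px.reshape(-1, 1, 2), H).reshape(-1, 2)

# Distance in cm between consecutive path points, written to a txt file.
dists = np.linalg.norm(np.diff(path_cm, axis=0), axis=1)
np.savetxt('path_distances_cm.txt', dists, fmt='%.2f')
print(dists)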

The OpenCV code used for calculating the parameters:

import numpy as np
import cv2
import glob

# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

cbrow = 6
cbcol = 9

# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(8,5,0)
objp = np.zeros((cbrow*cbcol,3), np.float32)
objp[:,:2] = np.mgrid[0:cbcol,0:cbrow].T.reshape(-1,2)

# Arrays to store object points and image points from all the images.
objpoints = [] # 3d point in real world space
imgpoints = [] # 2d points in image plane.

images = glob.glob(r'C:\Users\Ender\Desktop\CalibrationPhotos\*.jpg')

for fname in images:
    img = cv2.imread(fname)

    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

    # Find the chess board corners
    ret, corners = cv2.findChessboardCorners(gray, (cbcol,cbrow),None)

    # If found, add object points, image points (after refining them)
    if ret == True:
        print "%s: success" % fname
        objpoints.append(objp)

        corners = cv2.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)
        imgpoints.append(corners)

        # Draw and display the corners
        cv2.drawChessboardCorners(img, (cbcol,cbrow), corners,ret)
        cv2.imshow('img',img)
        cv2.waitKey(150)

    else:
        print "%s: failed" % fname
        cv2.imshow('img',img)
        cv2.waitKey(1)

cv2.destroyAllWindows()

ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1],None,None)

print "mtx"
print mtx
print "dist"
print dist
#print "rvecs"
#print rvecs
#print "tvecs"
#print tvecs
np.savez("CalibData.npz" ,mtx=mtx, dist=dist, rvecs=rvecs, tvecs=tvecs)

# UNDISTORTION
img = cv2.imread(r'C:\Users\Ender\Desktop\Maze\Maze Images\Partial Maze Images-Raw\Raw7.jpg')
h,  w = img.shape[:2]
newcameramtx, roi=cv2.getOptimalNewCameraMatrix(mtx,dist,(w,h),1,(w,h))

dst = cv2.undistort(img, mtx, dist, None, newcameramtx)

# crop the image
x,y,w,h = roi
dst = dst[y:y+h, x:x+w]
cv2.imwrite('calibresult.jpg',dst)

# Re-projection errors
total_error = 0
for i in range(len(objpoints)):
    imgpoints2, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)
    error = cv2.norm(imgpoints[i], imgpoints2, cv2.NORM_L2) / len(imgpoints2)
    total_error += error

print("total error: ", total_error / len(objpoints))

What is the way to do this?

0 Answers:

No answers yet