How do I get pixels per unit length after calling calibrateCamera?

Asked: 2018-07-25 02:29:43

Tags: opencv camera-calibration

I am calibrating a camera using a circle grid. The camera is in a fixed position above a table, so I am calibrating from a single image. (All the objects I will work with are flat and lie on the same table as the calibration image.) I put the real-world positions of the circle centers into objectPoints and pass them to calibrateCamera.

Here is my calibration code (essentially the OpenCV calibrateCamera sample program stripped down to handle a single image):

calibration.cpp

int circlesPerRow = 56;
int circlesPerColumn = 32;
// The distance between circle centers is 4 cm
double centerToCenterDistance = 0.04;

Mat calibrationImage = imread(calibrationImageFileName, IMREAD_GRAYSCALE);
vector<Point2f> detectedCenters;
Size boardSize(circlesPerRow, circlesPerColumn);
bool found = findCirclesGrid(calibrationImage, boardSize, detectedCenters);
if (!found)
{
    return ERR_INVALID_BOARD;
}

// Put the detected centers in the imagePoints vector
vector<vector<Point2f> > imagePoints;
imagePoints.push_back(detectedCenters);

// Set the aspect ratio to 1
Mat cameraMatrix = Mat::eye(3, 3, CV_64F);
double aspectRatio = 1.0;
cameraMatrix.at<double>(0, 0) = 1.0;

Size imageSize(calibrationImage.size());
vector<Mat> rvecs, tvecs;
Mat distCoeffs = Mat::zeros(8, 1, CV_64F);

// Create a vector of the centers in user units
vector<vector<Point3f> > objectPoints(1);
for (int i = 0; i < circlesPerColumn; i++)
    for (int j = 0; j < circlesPerRow; j++)
        objectPoints[0].push_back(Point3f(float(j*centerToCenterDistance), float(i*centerToCenterDistance), 0));

int flags = CALIB_FIX_ASPECT_RATIO | CALIB_FIX_K4 | CALIB_FIX_K5;
calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix, distCoeffs, rvecs, tvecs, flags);

After calling calibrateCamera, how do I calculate the number of pixels per meter on the same plane as the calibration circles in the undistorted image?
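For reference, the undistorted image mentioned here would be produced from the calibration output along these lines (a minimal sketch, not part of the original code, reusing the variables defined above):

// Sketch: remove lens distortion using the calibration results, so that
// measurements can be taken in the undistorted image.
Mat undistortedImage;
undistort(calibrationImage, undistortedImage, cameraMatrix, distCoeffs);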

1 Answer:

Answer 0 (score: 1):

First of all, you are calibrating with only 1 image... it is recommended to use several images in different positions to get more accurate results, because you are computing the intrinsic parameters (if it were only the camera pose, a single PnP solve would be enough).
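For example, once the intrinsics are known (from a proper multi-image calibration), the pose of a single view can be recovered with solvePnP alone; a minimal sketch, reusing the point vectors from the question's code:

// Sketch: recover only the camera pose for one view, assuming cameraMatrix and
// distCoeffs are already known; objectPoints[0] and detectedCenters come from
// the question's code.
cv::Mat poseRvec, poseTvec;
cv::solvePnP(objectPoints[0], detectedCenters, cameraMatrix, distCoeffs, poseRvec, poseTvec);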

calibrateCamera gives you the intrinsic parameters (the camera matrix) needed to project 3D points onto the camera image plane. It also gives you the extrinsic parameters, i.e. the pose of the calibration target relative to the camera origin (one set per image supplied).
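Concretely, after the calibrateCamera call above, the intrinsics are in cameraMatrix and the extrinsics of the single view are in rvecs[0] / tvecs[0]; a small sketch of how they can be read back (calibrateCamera stores them as doubles):

// Sketch: inspecting the calibration output (variable names from the question).
double fx = cameraMatrix.at<double>(0, 0);  // focal length in pixels, x
double fy = cameraMatrix.at<double>(1, 1);  // focal length in pixels, y
double cx = cameraMatrix.at<double>(0, 2);  // principal point, x
double cy = cameraMatrix.at<double>(1, 2);  // principal point, y
cv::Mat R;
cv::Rodrigues(rvecs[0], R);                 // 3x3 rotation: board frame -> camera frame
// tvecs[0] holds the position of the board origin in camera coordinates (meters here)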

Once this calibration is done, you can create a pair of points such as:

cv::Vec3f a(0., 0., 0.), b(1., 0., 0.);

This assumes you are using meters as your world coordinate units; if not, multiply accordingly :)

Now you have 2 options: the manual way of applying the pinhole camera model formula to these two points, using as extrinsics the ones generated for the image with the desired camera pose (in your case there is only one); or you can use projectPoints like this (a sketch of the manual route is also given after the caveat below):

// your last line
cv::calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix, distCoeffs, rvecs, tvecs, flags);
// prepare the points (one meter apart)
std::vector<cv::Point3f> pointsToProject{cv::Vec3f{0., 0., 0.}, cv::Vec3f{0., 1., 0.}};
std::vector<cv::Point2f> projectedPoints;
// invert the extrinsic matrix (calibrateCamera returns rvecs/tvecs as CV_64F)
cv::Mat rotMat;
cv::Rodrigues(rvecs[0], rotMat);
cv::Mat transformation = cv::Mat::eye(4, 4, CV_64F);
rotMat.copyTo(transformation(cv::Rect(0, 0, 3, 3)));
transformation.at<double>(0, 3) = tvecs[0].at<double>(0);
transformation.at<double>(1, 3) = tvecs[0].at<double>(1);
transformation.at<double>(2, 3) = tvecs[0].at<double>(2);
transformation = transformation.inv();

// extract the rotation and translation vectors of the inverted transform
cv::Mat rvec, tvec(3, 1, CV_64F);
cv::Rodrigues(transformation(cv::Rect(0, 0, 3, 3)), rvec);
tvec.at<double>(0) = transformation.at<double>(0, 3);
tvec.at<double>(1) = transformation.at<double>(1, 3);
tvec.at<double>(2) = transformation.at<double>(2, 3);

cv::projectPoints(pointsToProject, rvec, tvec, cameraMatrix, distCoeffs, projectedPoints);
double amountOfPixelsPerMeter = cv::norm(projectedPoints[0] - projectedPoints[1]);

However, this gives the pixel distance for one meter measured before the extrinsics are applied, so even though the segment lies along a coordinate axis, the result can differ depending on the rotation.
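For comparison, the "manual" option mentioned above is just the pinhole model applied by hand; a minimal sketch that keeps the two points in board coordinates (so the measured meter lies on the calibration plane) and ignores lens distortion, i.e. it corresponds to the undistorted image:

// Sketch of the manual route: project two board-plane points one meter apart
// through the pinhole model, ignoring distortion (variables from the calibration above).
cv::Mat boardRot;
cv::Rodrigues(rvecs[0], boardRot);                            // board -> camera rotation
cv::Mat a = (cv::Mat_<double>(3, 1) << 0., 0., 0.);
cv::Mat b = (cv::Mat_<double>(3, 1) << 1., 0., 0.);
auto project = [&](const cv::Mat& X) {
    cv::Mat Xc = boardRot * X + tvecs[0];                     // board -> camera coordinates
    double x = Xc.at<double>(0) / Xc.at<double>(2);
    double y = Xc.at<double>(1) / Xc.at<double>(2);
    return cv::Point2d(cameraMatrix.at<double>(0, 0) * x + cameraMatrix.at<double>(0, 2),
                       cameraMatrix.at<double>(1, 1) * y + cameraMatrix.at<double>(1, 2));
};
double pixelsPerMeterOnBoardPlane = cv::norm(project(a) - project(b));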

I hope this helps, even without further comments. I wrote most of it from memory, so there may be typos.