I am trying to quantify the accuracy of a camera calibration with OpenCV. In my program I read an image of a chessboard pattern and call the calibrateCamera function to get a first estimate of my camera intrinsics and extrinsics. I know that using only a single image will not produce a perfect calibration, and that calibrateCamera returns the reprojection error. However, I would like to use the projectPoints function to obtain the image points of the detected corners on the calibration board for further processing. I use the code below for the calibration, but the program crashes at runtime when it reaches the projectPoints call. If I remove that function call, the code works fine.
Mat image_;
Mat gray_image_;
Size chessboard_size_;
vector<Point2f> corners_;
vector< vector< Point2f> > imagePoints_;
vector< Point2f> imagePointsProjected_;
vector< vector< Point3f> > objectPoints_;
bool corners_found;
float measure_ = 35;
chessboard_size_ = Size(CHESSBOARD_INTERSECTIONS_HORIZONTAL, CHESSBOARD_INTERSECTIONS_VERTICAL);
// image of type CV_8UC3 is read, with 8 bit & 3 channels
image_ = imread("/home/fes1rng/left.png");
if (!image_.data)
{
    printf("No image data \n");
    return;
}
// image is converted to grayscale, afterwards it is of type CV_8UC1
cvtColor(image_, gray_image_, CV_RGB2GRAY);
// detect corners and draw them
corners_found = findChessboardCorners(gray_image_, Size(CHESSBOARD_INTERSECTIONS_HORIZONTAL, CHESSBOARD_INTERSECTIONS_VERTICAL), corners_);
if (corners_found)
{
    cornerSubPix(gray_image_, corners_, Size(11, 11), Size(-1, -1), TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
    drawChessboardCorners(image_, Size(CHESSBOARD_INTERSECTIONS_HORIZONTAL, CHESSBOARD_INTERSECTIONS_VERTICAL), corners_, corners_found);
}
vector< Point2f> v_tImgPT;
vector< Point3f> v_tObjPT;
// save 2D image coordinates and corresponding 3D world coordinates
for (int j = 0; j < corners_.size(); ++j)
{
    Point2d tImgPT;
    Point3d tObjPT;
    tImgPT.x = corners_[j].x;
    tImgPT.y = corners_[j].y;
    tObjPT.x = j % CHESSBOARD_INTERSECTIONS_HORIZONTAL * measure_;
    tObjPT.y = j / CHESSBOARD_INTERSECTIONS_HORIZONTAL * measure_;
    tObjPT.z = 0;
    v_tImgPT.push_back(tImgPT);
    v_tObjPT.push_back(tObjPT);
}
imagePoints_.push_back(v_tImgPT);
objectPoints_.push_back(v_tObjPT);
Mat rvec(3,1, CV_64FC1);
Mat tvec(3,1, CV_64FC1);
vector<Mat> rvecs;
vector<Mat> tvecs;
rvecs.push_back(rvec);
tvecs.push_back(tvec);
Mat intrinsic_Matrix(3,3, CV_64FC1);
Mat distortion_coeffs(8,1, CV_64FC1);
calibrateCamera(objectPoints_, imagePoints_, image_.size(), intrinsic_Matrix, distortion_coeffs, rvecs, tvecs);
projectPoints(objectPoints_, rvecs, tvecs, intrinsic_Matrix, distortion_coeffs, imagePointsProjected_);
cv::namedWindow( "Display Image", CV_WINDOW_AUTOSIZE );
cv::imshow( "Display Image", image_ );
waitKey(0);
The error message is:
OpenCV Error: Assertion failed (0 <= i && i < (int)vv.size()) in getMat, file /build/buildd/opencv-2.4.8+dfsg1/modules/core/src/matrix.cpp, line 977
terminate called after throwing an instance of 'cv::Exception'
what(): /build/buildd/opencv-2.4.8+dfsg1/modules/core/src/matrix.cpp:977: error: (-215) 0 <= i && i < (int)vv.size() in function getMat
Since the error occurs at runtime and inside a nested function call, I assume it is caused by a wrong data type of one of the matrices. But since projectPoints is used internally by calibrateCamera, I am confused as to why a standalone call with the same arguments causes an error.
Answer 0 (score: 2)
projectPoints expects a std::vector&lt;cv::Point3f&gt; as its first parameter, not a std::vector&lt;std::vector&lt;cv::Point3f&gt;&gt; (and likewise a single rvec/tvec Mat rather than a vector&lt;Mat&gt;). Unlike calibrateCamera, it operates on the points of one view at a time.
Answer 1 (score: 1)
Using the following expression solved the problem!
projectPoints(objectPoints_.front(), rvecs.front(), tvecs.front(), intrinsic_Matrix, distortion_coeffs, imagePointsProjected_);