Performing calibration on a fisheye image - removing the fisheye effect

Date: 2016-02-10 14:02:09

Tags: c++ opencv distortion fisheye

I am currently using the OpenCV library with C++, and my goal is to remove the fisheye effect from an image ("make it plane"). I am using the function "undistortImage" to undo the effect, but I first need to perform a camera calibration to find the parameters K, Knew and D, and I do not fully understand the documentation (link: {{3}}). From what I understand, I should provide two lists of points and the "calibrate" function should return the arrays I need. So my question is the following: given a fisheye image, how do I choose the two lists of points to get a usable result? Below is my current code. It is very basic: it loads a picture, displays it, performs the undistortion and displays the new image. The elements of the matrices are random, so at the moment the result is not as expected. Thank you for your answers.

#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include <stdio.h>
#include <iostream>


using namespace std;
using namespace cv;

int main(){

    cout << " Usage: display_image ImageToLoadAndDisplay" << endl;
    Mat image;
    image = imread("C:/Users/Administrator/Downloads/eiffel.jpg", IMREAD_COLOR);   // Read the file
    if (!image.data)                              // Check for invalid input
    {
        cout << "Could not open or find the image" << endl;
        return -1;
    }
    cout << "Input image depth: " << image.depth() << endl;

    namedWindow("Display window", WINDOW_AUTOSIZE);// Create a window for display.
    imshow("Display window", image);                   // Show our image inside it.

    Mat Ka = Mat::eye(3, 3, CV_64F);  // Camera (intrinsic) matrix K, refined by the calibration
    Mat Da = Mat::ones(1, 4, CV_64F); // Fisheye distortion coefficients k1..k4, refined by the calibration
    Mat dstImage(image.rows, image.cols, CV_32F);

    cout << "K matrix depth: " << Ka.depth() << endl;
    cout << "D matrix depth: " << Da.depth() << endl;

    Mat Knew = Mat::eye(3, 3, CV_64F);
    std::vector<cv::Vec3d> rvec;
    std::vector<cv::Vec3d> tvec;
    int flag = 0; 
    std::vector<Point3d> objectPoints1 = { Point3d(0,0,0),  Point3d(1,1,0),  Point3d(2,2,0), Point3d(3,3,0), Point3d(4,4,0), Point3d(5,5,0),
        Point3d(6,6,0),  Point3d(7,7,0),  Point3d(3,0,0), Point3d(4,1,0), Point3d(5,2,0), Point3d(6,3,0), Point3d(7,4,0),  Point3d(8,5,0),  Point3d(5,4,0), Point3d(0,7,0), Point3d(9,7,0), Point3d(9,0,0), Point3d(4,3,0), Point3d(7,2,0) };
    std::vector<Point2d> imagePoints1 = { Point2d(107,84),  Point2d(110,90),  Point2d(116,96), Point2d(126,107), Point2d(142,123), Point2d(168,147),
        Point2d(202,173),  Point2d(232,192),  Point2d(135,69), Point2d(148,73), Point2d(165,81), Point2d(189,93), Point2d(219,112),  Point2d(248,133),  Point2d(166,119), Point2d(96,183), Point2d(270,174), Point2d(226,56), Point2d(144,102), Point2d(206,75) };

    std::vector<std::vector<cv::Point2d> > imagePoints(1);
    imagePoints[0] = imagePoints1;
    std::vector<std::vector<cv::Point3d> > objectPoints(1);
    objectPoints[0] = objectPoints1;
    fisheye::calibrate(objectPoints, imagePoints, image.size(), Ka, Da, rvec, tvec, flag); // Calibration
    cout << Ka<< endl;
    cout << Da << endl;
    fisheye::undistortImage(image, dstImage, Ka, Da, Knew); // Undistort the image with the calibrated parameters
    namedWindow("Display window 2", WINDOW_AUTOSIZE);// Create a window for display.
    imshow("Display window 2", dstImage);                   // Show our image inside it.

    waitKey(0);                                          // Wait for a keystroke in the window
    return 0;
}

1 answer:

Answer 0: (score: 1)

To calibrate with cv::fisheye::calibrate you have to provide

objectPoints    vector of vectors of calibration pattern points in the calibration pattern coordinate space. 

This means you have to provide the known real-world coordinates of the points (they must correspond to the points listed in imagePoints). You can place the coordinate system wherever you like (but it must be Cartesian), so you have to know your object, e.g. a planar test pattern.
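
For a planar test pattern those coordinates do not have to be typed in one by one; they can be generated as a grid. A minimal sketch, assuming a pattern with a known number of marked corners per row and column (the helper name makePatternObjectPoints is made up for illustration):

#include "opencv2/core/core.hpp"

#include <vector>

// Hypothetical helper: generate objectPoints for a planar pattern lying in the z = 0 plane.
// cornersX x cornersY is the grid of marked points; one pattern square is used as the
// (arbitrary) unit of length, since calibration only needs the pattern up to scale.
std::vector<cv::Point3d> makePatternObjectPoints(int cornersX, int cornersY)
{
    std::vector<cv::Point3d> pts;
    pts.reserve(static_cast<size_t>(cornersX) * cornersY);
    for (int y = 0; y < cornersY; ++y)
        for (int x = 0; x < cornersX; ++x)
            pts.emplace_back(x, y, 0.0);   // (column, row, 0) measured in "pattern squares"
    return pts;
}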

imagePoints vector of vectors of the projections of calibration pattern points

These must be the same points as in objectPoints, but given in image coordinates, i.e. the positions where the projections of the object points hit the image (read/extract those coordinates from the image).
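
If the test pattern happens to be a standard chessboard, OpenCV can extract these projections automatically with cv::findChessboardCorners. A sketch under that assumption (the helper name, window size and termination criteria are illustrative choices, not values from this answer):

#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/calib3d/calib3d.hpp"

#include <vector>

// Hypothetical helper: detect the inner chessboard corners in one calibration image.
// OpenCV returns them row by row, so they can be paired directly with a row-by-row
// generated objectPoints list. Returns false if the pattern is not visible.
bool detectImagePoints(const cv::Mat& bgrImage, cv::Size patternSize,
                       std::vector<cv::Point2d>& imagePoints)
{
    std::vector<cv::Point2f> corners;
    if (!cv::findChessboardCorners(bgrImage, patternSize, corners))
        return false;

    cv::Mat gray;
    cv::cvtColor(bgrImage, gray, cv::COLOR_BGR2GRAY);
    // Refine to sub-pixel accuracy; window size and criteria are plausible defaults only.
    cv::cornerSubPix(gray, corners, cv::Size(5, 5), cv::Size(-1, -1),
                     cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));

    imagePoints.clear();
    for (const cv::Point2f& p : corners)
        imagePoints.push_back(cv::Point2d(p.x, p.y));
    return true;
}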

For example, if your camera captured this image (taken from here):

[image: a test pattern captured by a fisheye camera]

You have to know the dimensions of your test pattern (up to scale). For example, you could choose the top-left corner of the top-left square to be position (0,0,0), the top-left corner of the square to its right to be (1,0,0), and its bottom-right corner to be (1,1,0), so that the whole test pattern lies in the x-y plane.

Then you could extract, for example, these correspondences:

pixel        real-world
(144,103)    (4,3,0)
(206,75)     (7,2,0)
(109,151)    (2,5,0)
(253,159)    (8,6,0)

These points (marked in red):

[image: the test pattern with those four points marked in red]

The pixel positions could be your imagePoints list, while the real-world positions could be your objectPoints list.
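
To tie it together, here is a rough sketch of how such lists, ideally collected from several views of the pattern, could be passed to cv::fisheye::calibrate and then to cv::fisheye::undistortImage. The flag combination, the balance value of 0.5 and the helper name calibrateAndUndistort are assumptions for illustration only:

#include "opencv2/core/core.hpp"
#include "opencv2/calib3d/calib3d.hpp"

#include <iostream>
#include <vector>

// Hypothetical helper: calibrate from the collected correspondences (one inner vector
// per calibration image, with objectPoints[i][j] matching imagePoints[i][j]), then
// remove the fisheye distortion from one image.
cv::Mat calibrateAndUndistort(const std::vector<std::vector<cv::Point3d> >& objectPoints,
                              const std::vector<std::vector<cv::Point2d> >& imagePoints,
                              const cv::Mat& fisheyeImage)
{
    cv::Mat K = cv::Mat::eye(3, 3, CV_64F);    // intrinsic matrix, estimated by calibrate()
    cv::Mat D = cv::Mat::zeros(4, 1, CV_64F);  // fisheye distortion coefficients k1..k4
    std::vector<cv::Vec3d> rvecs, tvecs;       // per-view pattern pose (not used afterwards)

    // Flag combination is an assumption; it is a common starting point, not a requirement.
    int flags = cv::fisheye::CALIB_RECOMPUTE_EXTRINSIC | cv::fisheye::CALIB_FIX_SKEW;
    double rms = cv::fisheye::calibrate(objectPoints, imagePoints, fisheyeImage.size(),
                                        K, D, rvecs, tvecs, flags);
    std::cout << "RMS reprojection error: " << rms << std::endl;

    // Choose Knew for the undistorted view; balance = 0 crops more, balance = 1 keeps all pixels.
    cv::Mat Knew;
    cv::fisheye::estimateNewCameraMatrixForUndistortRectify(K, D, fisheyeImage.size(),
                                                            cv::Matx33d::eye(), Knew, 0.5);

    cv::Mat undistorted;
    cv::fisheye::undistortImage(fisheyeImage, undistorted, K, D, Knew);
    return undistorted;
}

Note that a single view with only a handful of correspondences generally gives a much less reliable estimate than all pattern corners collected from several images.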

Does this answer your question?