Manually correcting barrel distortion in OpenCV without chessboard images

Posted: 2014-10-28 07:34:23

Tags: opencv imagemagick distortion lens

I am getting images from a camera and cannot take a picture of a chessboard to compute a correction matrix with OpenCV. So far I have been undistorting the images with imagemagick convert, using the option '-distort Barrel "0.0 0.0 -0.035 1.1"', with parameters I found by trial and error.

Now I would like to do this in OpenCV, but all I can find online is automatic correction using chessboard images. Is there a way to apply some simple manual trial-and-error lens distortion correction, like I did with imagemagick?

3 Answers:

Answer 0 (score: 8)

OK, I think I've got it. The image center was missing from the camera matrices cam1 and cam2 (see the documentation). I added it and changed the focal length to keep the image size from changing too much. Here is the code:

  import numpy as np
  import cv2

  src    = cv2.imread("distortedImage.jpg")
  width  = src.shape[1]
  height = src.shape[0]

  distCoeff = np.zeros((4,1),np.float64)

  # TODO: add your coefficients here!
  k1 = -1.0e-5  # negative to remove barrel distortion
  k2 = 0.0
  p1 = 0.0
  p2 = 0.0

  distCoeff[0,0] = k1
  distCoeff[1,0] = k2
  distCoeff[2,0] = p1
  distCoeff[3,0] = p2

  # assume unit matrix for camera
  cam = np.eye(3,dtype=np.float32)

  cam[0,2] = width/2.0  # define center x
  cam[1,2] = height/2.0 # define center y
  cam[0,0] = 10.        # define focal length x
  cam[1,1] = 10.        # define focal length y

  # here the undistortion will be computed
  dst = cv2.undistort(src,cam,distCoeff)

  cv2.imshow('dst',dst)
  cv2.waitKey(0)
  cv2.destroyAllWindows()

Thank you very much for your help.
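Since the coefficients still have to be found by trial and error, here is a minimal interactive sketch (my own addition, not part of the original answer) that tunes k1 with a trackbar; it assumes the same test image, camera matrix, and coefficient layout as the code above:

  import numpy as np
  import cv2

  src = cv2.imread("distortedImage.jpg")
  height, width = src.shape[:2]

  # same camera matrix as in the answer above
  cam = np.eye(3, dtype=np.float32)
  cam[0,2] = width/2.0        # center x
  cam[1,2] = height/2.0       # center y
  cam[0,0] = cam[1,1] = 10.0  # focal length

  def on_change(pos):
      # map slider position 0..200 to k1 in [-1e-4, +1e-4]
      k1 = (pos - 100) * 1.0e-6
      distCoeff = np.zeros((4,1), np.float64)
      distCoeff[0,0] = k1
      cv2.imshow('undistorted', cv2.undistort(src, cam, distCoeff))

  cv2.namedWindow('undistorted')
  cv2.createTrackbar('k1', 'undistorted', 100, 200, on_change)
  on_change(100)
  cv2.waitKey(0)
  cv2.destroyAllWindows()

Drag the slider until straight edges in the scene look straight; the slider range and the scaling of k1 are arbitrary choices and may need adjusting for other cameras.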

Answer 1 (score: 2)

If you don't have a chessboard pattern but you do know the distortion coefficients, here is a way to undistort the image.

Since I don't know which coefficients your barrel-distortion parameters correspond to (you may want to look at http://docs.opencv.org/doc/tutorials/calib3d/camera_calibration/camera_calibration.html and http://docs.opencv.org/modules/imgproc/doc/geometric_transformations.html#initundistortrectifymap), you will have to try it out, or maybe someone else here can help.

Another point: I'm not sure whether OpenCV automatically handles the conversion between float and double. If it doesn't, there may be a bug in this code (I don't know whether it assumes single or double precision):

cv::Mat distCoeff;
distCoeff = cv::Mat::zeros(8,1,CV_64FC1);

// indices: k1, k2, p1, p2, k3, k4, k5, k6 
// TODO: add your coefficients here!
double k1 = 0;
double k2 = 0;
double p1 = 0;
double p2 = 0;
double k3 = 0;
double k4 = 0;
double k5 = 0;
double k6 = 0;

distCoeff.at<double>(0,0) = k1;
distCoeff.at<double>(1,0) = k2;
distCoeff.at<double>(2,0) = p1;
distCoeff.at<double>(3,0) = p2;
distCoeff.at<double>(4,0) = k3;
distCoeff.at<double>(5,0) = k4;
distCoeff.at<double>(6,0) = k5;
distCoeff.at<double>(7,0) = k6;




// assume unit matrix for camera, so no movement
cv::Mat cam1,cam2;
cam1 = cv::Mat::eye(3,3,CV_32FC1);
cam2 = cv::Mat::eye(3,3,CV_32FC1);
//cam2.at<float>(0,2) = 100;    // for testing a translation

// here the undistortion will be computed
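// note: 'input' is the distorted source image (cv::Mat), assumed to be loaded elsewhere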
cv::Mat map1, map2;
cv::initUndistortRectifyMap(cam1, distCoeff, cv::Mat(), cam2,  input.size(), CV_32FC1, map1, map2);

cv::Mat distCorrected;
cv::remap(input, distCorrected, map1, map2, cv::INTER_LINEAR);
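For readers working in Python, as in answer 0, here is a rough equivalent of the map-based approach above (my sketch, with placeholder coefficient values). Precomputing the maps once and calling remap per frame is useful when the same correction has to be applied to a stream of camera images:

import numpy as np
import cv2

src = cv2.imread("distortedImage.jpg")
h, w = src.shape[:2]

# coefficient order: k1, k2, p1, p2, k3, k4, k5, k6
distCoeff = np.zeros((8,1), np.float64)
distCoeff[0,0] = -1.0e-5   # k1, placeholder value

# camera matrix as in answer 0: principal point at the image center
cam = np.eye(3, dtype=np.float32)
cam[0,2], cam[1,2] = w/2.0, h/2.0
cam[0,0] = cam[1,1] = 10.0

# build the maps once, then remap each incoming frame (5 == CV_32FC1)
map1, map2 = cv2.initUndistortRectifyMap(cam, distCoeff, None, cam, (w, h), 5)
dst = cv2.remap(src, map1, map2, cv2.INTER_LINEAR)

cv2.imshow('dst', dst)
cv2.waitKey(0)
cv2.destroyAllWindows()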

Answer 2 (score: 1)

As a complement to the undistortion, here is a function that applies the distortion to an image. There may be a faster or better way to do it, but it works:

void distort(const cv::Mat& src, cv::Mat& dst, const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs)
{

  cv::Mat distort_x = cv::Mat(src.size(), CV_32F);
  cv::Mat distort_y = cv::Mat(src.size(), CV_32F);

  cv::Mat pixel_locations_src = cv::Mat(src.size(), CV_32FC2);

  for (int i = 0; i < src.size().height; i++) {
    for (int j = 0; j < src.size().width; j++) {
      pixel_locations_src.at<cv::Point2f>(i,j) = cv::Point2f(j,i);
    }
  }

  cv::Mat fractional_locations_dst = cv::Mat(src.size(), CV_32FC2);

  cv::undistortPoints(pixel_locations_src, fractional_locations_dst, cameraMatrix, distCoeffs);

  cv::Mat pixel_locations_dst = cv::Mat(src.size(), CV_32FC2);

  const float fx = cameraMatrix.at<double>(0,0);
  const float fy = cameraMatrix.at<double>(1,1);
  const float cx = cameraMatrix.at<double>(0,2);
  const float cy = cameraMatrix.at<double>(1,2);

  // is there a faster way to do this?
  for (int i = 0; i < fractional_locations_dst.size().height; i++) {
    for (int j = 0; j < fractional_locations_dst.size().width; j++) {
      const float x = fractional_locations_dst.at<cv::Point2f>(i,j).x*fx + cx;
      const float y = fractional_locations_dst.at<cv::Point2f>(i,j).y*fy + cy;
      pixel_locations_dst.at<cv::Point2f>(i,j) = cv::Point2f(x,y);
    }
  }

  cv::remap(src, dst, pixel_locations_dst, cv::Mat(), cv::INTER_LINEAR);
}
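For completeness, a rough Python counterpart of this distort function (my sketch, not from the answer; the vectorized map construction replaces the per-pixel loops):

import numpy as np
import cv2

def distort(src, camera_matrix, dist_coeffs):
    # Apply (rather than remove) the lens distortion: treat every pixel as a
    # distorted location, find where it lies in the undistorted image, and
    # sample the source there via remap.
    h, w = src.shape[:2]
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    pts = np.stack([xs, ys], axis=-1).reshape(-1, 1, 2)

    # normalized, undistorted coordinates for every pixel location
    und = cv2.undistortPoints(pts, camera_matrix, dist_coeffs).reshape(h, w, 2)

    fx, fy = camera_matrix[0,0], camera_matrix[1,1]
    cx, cy = camera_matrix[0,2], camera_matrix[1,2]
    map_x = np.ascontiguousarray(und[:,:,0] * fx + cx, dtype=np.float32)
    map_y = np.ascontiguousarray(und[:,:,1] * fy + cy, dtype=np.float32)
    return cv2.remap(src, map_x, map_y, cv2.INTER_LINEAR)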