Un-distorting the raw images received from the Leap Motion camera

Asked: 2014-11-02 10:17:01

Tags: opencv image-processing shader distortion leap-motion

I have been working with the Leap for quite some time now. The 2.1+ SDK versions let us access the cameras and grab the raw images. I want to use those images with OpenCV for square/circle detection and so on... the problem is that I can't manage to undistort these images. I read the documentation, but I don't fully understand what it means. This is the one thing I need to understand properly before moving on:

        distortion_data_ = image.distortion();
        for (int d = 0; d < image.distortionWidth() * image.distortionHeight(); d += 2)
        {
            float dX = distortion_data_[d];
            float dY = distortion_data_[d + 1];
            if(!((dX < 0) || (dX > 1)) && !((dY < 0) || (dY > 1)))
            {
               //what do i do now to undistort the image?
            }
        }
        data = image.data();
        mat.put(0, 0, data);
        //Imgproc.Canny(mat, mat, 100, 200);
        //mat = findSquare(mat);
        ok.showImage(mat);    

In the documentation it says this: "The calibration map can be used to correct image distortion due to lens curvature and other imperfections. The map is a 64x64 grid of points. Each point consists of two 32-bit values..." (the rest is on the developer site).

Can someone explain this in detail, or alternatively just post Java code that undistorts the images and gives me an output Mat image so I can continue processing (I would still prefer a good explanation if possible)?

2 Answers:

Answer 0 (score 1):

OK, I don't have a Leap camera to test all of this, but this is how I understand the documentation:

The calibration map does not hold offsets but full point positions. An entry says where the pixel has to be placed. Those values are mapped between 0 and 1, which means that you have to multiply them by your real image width and height.

What isn't explained explicitly is how your pixel position maps to a position in the 64 x 64 calibration map. I assume it is done the same way: a 640-pixel width is mapped to a 64-point width, and a 240-pixel height is mapped to a 64-point height.

So in general, to move from one of your 640 x 240 pixel positions (pX, pY) to the undistorted position you would:

  1. Compute the corresponding position in the calibration map: float cX = pX/640.0f * 64.0f; float cY = pY/240.0f * 64.0f;
  2. (cX, cY) is now the position of that pixel within the calibration map. You will have to interpolate between the surrounding map positions, but for now I'll only explain how to proceed for a discrete position in the calibration map: (cX', cY') = rounded locations of (cX, cY).
  3. Read the x and y values out of the calibration map: dX, dY, as in the documentation. You compute the position in the array by: d = cY'*calibrationMapWidth*2 + cX'*2; (note that the index is built from the map coordinates, not from the values you are about to read).
  4. dX and dY are values between 0 and 1 (if not: don't undistort this pixel, because no undistortion data is available). To find the pixel position in your real image, multiply by the image size: uX = dX*640; uY = dY*240;
  5. Set your pixel to the undistorted value: undistortedImage(pX, pY) = distortedImage(uX, uY);
  6. But there are no discrete point positions in your calibration map, so you have to interpolate. Here is an example:

    Let (cX, cY) = (13.7, 10.4)

    So you read four values out of the calibration map:

    1. calibMap(13,10) = (dX1, dY1)
    2. calibMap(14,10) = (dX2, dY2)
    3. calibMap(13,11) = (dX3, dY3)
    4. calibMap(14,11) = (dX4, dY4)
    5. Now your undistorted pixel position for (13.7, 10.4) is (first multiply each by 640 or 240 to get uX1, uY1, uX2, etc.):

      // interpolate in x direction first:
      float tmpUX1 = uX1*0.3 + uX2*0.7
      float tmpUY1 = uY1*0.3 + uY2*0.7
      
      float tmpUX2 = uX3*0.3 + uX4*0.7
      float tmpUY2 = uY3*0.3 + uY4*0.7
      
      // now interpolate in y direction
      float combinedX = tmpUX1*0.6 + tmpUX2*0.4
      float combinedY = tmpUY1*0.6 + tmpUY2*0.4
      

      Your undistorted point then is:

      undistortedImage(pX,pY) = distortedImage(floor(combinedX+0.5), floor(combinedY+0.5)); or interpolate between the pixel values there as well. A consolidated sketch of these steps follows below.
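
      A compact, untested sketch of steps 1-6 in C++ (assumptions: the 640x240 image and 64x64 map sizes from above; I scale by mapW-1 = 63 rather than 64 so the rightmost and bottom pixels keep an interpolation neighbor, matching the second answer below; readMap and undistortPixel are hypothetical helper names):

      #include <algorithm>

      const int imgW = 640, imgH = 240;  // raw Leap image size (from the question)
      const int mapW = 64,  mapH = 64;   // calibration map size (from the docs)

      // Read one (dX, dY) pair at discrete map coordinates (mx, my).
      // The buffer holds interleaved pairs, row by row (step 3).
      inline void readMap(const float* distortion, int mx, int my, float& dX, float& dY)
      {
          int d = my * mapW * 2 + mx * 2;
          dX = distortion[d];
          dY = distortion[d + 1];
      }

      // Steps 1-6 for one pixel. Returns false if the map marks the pixel as
      // having no undistortion data (values outside [0,1], step 4).
      bool undistortPixel(const float* distortion, int pX, int pY, int& uX, int& uY)
      {
          float cX = pX / (float)imgW * (mapW - 1);   // step 1
          float cY = pY / (float)imgH * (mapH - 1);
          int x1 = (int)cX, y1 = (int)cY;             // surrounding grid points
          int x2 = std::min(x1 + 1, mapW - 1);
          int y2 = std::min(y1 + 1, mapH - 1);
          float fx = cX - x1, fy = cY - y1;           // interpolation weights

          float dX1, dY1, dX2, dY2, dX3, dY3, dX4, dY4;
          readMap(distortion, x1, y1, dX1, dY1);      // step 6: four neighbors
          readMap(distortion, x2, y1, dX2, dY2);
          readMap(distortion, x1, y2, dX3, dY3);
          readMap(distortion, x2, y2, dX4, dY4);

          // bilinear interpolation of the map values
          float dX = dX1*(1-fx)*(1-fy) + dX2*fx*(1-fy) + dX3*(1-fx)*fy + dX4*fx*fy;
          float dY = dY1*(1-fx)*(1-fy) + dY2*fx*(1-fy) + dY3*(1-fx)*fy + dY4*fx*fy;

          if (dX < 0 || dX > 1 || dY < 0 || dY > 1) return false;
          uX = (int)(dX * imgW);                      // step 4: back to pixel coords
          uY = (int)(dY * imgH);
          return true;
      }

      With that, step 5 becomes: if undistortPixel succeeds, set undistortedImage(pX, pY) = distortedImage(uX, uY).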

      Hope this helps for a basic understanding. I'll add openCV remapping code soon! The only point that is unclear to me is whether the mapping between pX/pY and cX/cY is correct, since that isn't explicitly stated in the documentation.

      Here is some code. You can skip the first part, where I fake a distortion and create the calibration map; that is your initial situation.

      With openCV it is simple: just resize the calibration map to your image size and multiply all of its values by your resolution. The nice thing is that openCV performs the interpolation "automatically" while resizing.

      #include <opencv2/opencv.hpp>

      int main()
      {
          cv::Mat input = cv::imread("../Data/Lenna.png");
      
          cv::Mat distortedImage = input.clone();
      
          // now i fake some distortion:
          cv::Mat transformation = cv::Mat::eye(3,3,CV_64FC1);
          transformation.at<double>(0,0) = 2.0;
          cv::warpPerspective(input,distortedImage,transformation,input.size());
      
      
      
          cv::imshow("distortedImage", distortedImage);
          //cv::imwrite("../Data/LenaFakeDistorted.png", distortedImage);
      
          // now fake a calibration map corresponding to my faked distortion:
          const unsigned int cmWidth = 64;
          const unsigned int cmHeight = 64;
      
          // compute the calibration map by transforming image locations to values between 0 and 1 for legal positions.
          float calibMap[cmWidth*cmHeight*2];
          for(unsigned int y = 0; y < cmHeight; ++y)
              for(unsigned int x = 0; x < cmWidth; ++x)
              {
                  float xx = (float)x/(float)cmWidth;
                  xx = xx*2.0f; // this is from my fake distortion... this gives some values bigger than 1
                  float yy = (float)y/(float)cmHeight;
      
                  calibMap[y*cmWidth*2+ 2*x] = xx;
                  calibMap[y*cmWidth*2+ 2*x+1] = yy;
              }
      
      
          // NOW you have the initial situation of your scenario: calibration map and distorted image...
      
          // compute the image locations of calibration map values:
          cv::Mat cMapMatX = cv::Mat(cmHeight, cmWidth, CV_32FC1);
          cv::Mat cMapMatY = cv::Mat(cmHeight, cmWidth, CV_32FC1);
          for(int j=0; j<cmHeight; ++j)
              for(int i=0; i<cmWidth; ++i)
              {
                  cMapMatX.at<float>(j,i) = calibMap[j*cmWidth*2 +2*i];
                  cMapMatY.at<float>(j,i) = calibMap[j*cmWidth*2 +2*i+1];
              }
      
          //cv::imshow("mapX",cMapMatX);
          //cv::imshow("mapY",cMapMatY);
      
      
          // interpolate those values for each of your original images pixel:
          // here I use linear interpolation, you could use cubic or other interpolation too.
          cv::resize(cMapMatX, cMapMatX, distortedImage.size(), 0,0, CV_INTER_LINEAR);
          cv::resize(cMapMatY, cMapMatY, distortedImage.size(), 0,0, CV_INTER_LINEAR);
      
      
          // now the calibration map has the size of your original image, but its values are still between 0 and 1 (for legal positions)
          // so scale to image size:
          cMapMatX = distortedImage.cols * cMapMatX;
          cMapMatY = distortedImage.rows * cMapMatY;
      
      
          // now create undistorted image:
          cv::Mat undistortedImage = cv::Mat(distortedImage.rows, distortedImage.cols, CV_8UC3);
          undistortedImage.setTo(cv::Vec3b(0,0,0));   // initialize black
      
          //cv::imshow("undistorted", undistortedImage);
      
          for(int j=0; j<undistortedImage.rows; ++j)
              for(int i=0; i<undistortedImage.cols; ++i)
              {
                  cv::Point undistPosition;
                  undistPosition.x =(cMapMatX.at<float>(j,i)); // this will round the position, maybe you want interpolation instead
                  undistPosition.y =(cMapMatY.at<float>(j,i));
      
                  if(undistPosition.x >= 0 && undistPosition.x < distortedImage.cols 
                      && undistPosition.y >= 0 && undistPosition.y < distortedImage.rows)
      
                  {
                      undistortedImage.at<cv::Vec3b>(j,i) = distortedImage.at<cv::Vec3b>(undistPosition);
                  }
      
              }
      
          cv::imshow("undistorted", undistortedImage);
          cv::waitKey(0);
          //cv::imwrite("../Data/LenaFakeUndistorted.png", undistortedImage);
          return 0;
      }
      
      
      

      I used this as input and faked a remapping/distortion, from which I computed my calibration map:

      Input:

      [input image]

      Faked distortion:

      [distorted image]

      Undistorted using the map:

      [undistorted image]

      TODO: after these computations, use an OpenCV remap with those values to perform a faster remapping, as sketched below.
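
      A minimal sketch of that TODO, assuming the resized and rescaled cMapMatX/cMapMatY from the code above (cv::remap takes exactly such per-pixel CV_32FC1 source-coordinate maps; untested against a real Leap device):

      // cv::remap performs the same per-pixel lookup as the loop above,
      // with interpolation and border handling built in.
      cv::Mat remapped;
      cv::remap(distortedImage, remapped, cMapMatX, cMapMatY, cv::INTER_LINEAR,
                cv::BORDER_CONSTANT, cv::Scalar(0, 0, 0));
      cv::imshow("remapped", remapped);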

Answer 1 (score 0):

Here is an example that doesn't use OpenCV. The following seems to be faster than using the Leap::Image::warp() method (probably due to the additional function-call overhead when using warp()):

const float destinationWidth = 320;
const float destinationHeight = 120;
unsigned char destination[320][120]; // fixed-size buffer, [x][y] = width x height

//define needed variables outside the inner loop
float calX, calY, weightX, weightY, dX1, dX2, dX3, dX4, dY1, dY2, dY3, dY4, dX, dY;
int x1, x2, y1, y2, denormalizedX, denormalizedY;
int x, y;

const unsigned char* raw = image.data();
const float* distortion_buffer = image.distortion();

//Local variables for values needed in loop
const int distortionWidth = image.distortionWidth();
const int width = image.width();
const int height = image.height();

for (x = 0; x < destinationWidth; x++) {
    for (y = 0; y < destinationHeight; y++) {
        //Calculate the position in the calibration map (still with a fractional part)
        calX = 63 * x/destinationWidth;
        calY = 63 * y/destinationHeight;
        //Save the fractional part to use as the weight for interpolation
        weightX = calX - truncf(calX);
        weightY = calY - truncf(calY);

        //Get the x,y coordinates of the closest calibration map points to the target pixel
        x1 = calX; //Note truncation to int
        y1 = calY;
        x2 = x1 + 1;
        y2 = y1 + 1;

        //Look up the x and y values for the 4 calibration map points around the target
        // (x1, y1)  ..  .. .. (x2, y1)
        //    ..                 ..
        //    ..    (x, y)       ..
        //    ..                 ..
        // (x1, y2)  ..  .. .. (x2, y2)
        dX1 = distortion_buffer[x1 * 2 + y1 * distortionWidth];
        dX2 = distortion_buffer[x2 * 2 + y1 * distortionWidth];
        dX3 = distortion_buffer[x1 * 2 + y2 * distortionWidth];
        dX4 = distortion_buffer[x2 * 2 + y2 * distortionWidth];
        dY1 = distortion_buffer[x1 * 2 + y1 * distortionWidth + 1];
        dY2 = distortion_buffer[x2 * 2 + y1 * distortionWidth + 1];
        dY3 = distortion_buffer[x1 * 2 + y2 * distortionWidth + 1];
        dY4 = distortion_buffer[x2 * 2 + y2 * distortionWidth + 1];

        //Bilinear interpolation of the looked-up values:
        // X value
        dX = dX1 * (1 - weightX) * (1- weightY) + dX2 * weightX * (1 - weightY) + dX3 * (1 - weightX) * weightY + dX4 * weightX * weightY;

        // Y value
        dY = dY1 * (1 - weightX) * (1- weightY) + dY2 * weightX * (1 - weightY) + dY3 * (1 - weightX) * weightY + dY4 * weightX * weightY;

        // Reject points outside the range [0..1]
        if((dX >= 0) && (dX <= 1) && (dY >= 0) && (dY <= 1)) {
            //Denormalize from [0..1] to [0..width] or [0..height]
            denormalizedX = dX * width;
            denormalizedY = dY * height;

            //look up the brightness value for the target pixel
            destination[x][y] = raw[denormalizedX + denormalizedY * width];
        } else {
            destination[x][y] = -1; //mark invalid pixels (wraps to 255 for unsigned char)
        }
    }
}
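
Since the question asked for an output Mat: a minimal sketch (my assumption, not part of this answer) that wraps the destination buffer from above in OpenCV. The buffer is indexed [x][y], i.e. transposed relative to OpenCV's row-major layout:

// wrap the x-major buffer as a 320x120 Mat, then transpose to get a
// normal 120x320 grayscale image for further processing
cv::Mat wrapped(320, 120, CV_8UC1, destination);
cv::Mat result = wrapped.t(); // e.g. follow up with cv::Canny(result, result, 100, 200);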