HDR imaging with OpenCV on iOS produces garbled output

Date: 2015-12-01 00:23:39

Tags: ios swift opencv hdr

I'm trying to use OpenCV 3 on iOS to produce an HDR image from multiple exposures, which will eventually be output as an EXR file. When I tried to create an HDR image, I noticed the output was garbled. Thinking the camera-response calibration step might be the mistake, I started from scratch and adapted the HDR imaging tutorial material on the OpenCV site to iOS, but it produced similar results. The following C++ code returns a garbled image:

cv::Mat mergeToHDR (vector<Mat>& images, vector<float>& times)
{
    Mat response;
    //Ptr<CalibrateDebevec> calibrate = createCalibrateDebevec();
    //calibrate->process(images, response, times);

    Ptr<CalibrateRobertson> calibrate = createCalibrateRobertson();
    calibrate->process(images, response, times);

    // create HDR
    Mat hdr;
    Ptr<MergeDebevec> merge_debevec = createMergeDebevec();
    merge_debevec->process(images, hdr, times, response);

    // create LDR
    Mat ldr;
    Ptr<TonemapDurand> tonemap = createTonemapDurand(2.2f);
    tonemap->process(hdr, ldr);

    // create fusion
    Mat fusion;
    Ptr<MergeMertens> merge_mertens = createMergeMertens();
    merge_mertens->process(images, fusion);

    /*
    Uncomment what kind of tonemapped image or hdr to return
    Returning one of the images in the array produces ungarbled output
    so we know the problem is unlikely with the openCV to UIImage conversion
    */

    //give back one of the images from the image array
    //return images[0];

    //give back one of the hdr images
    return fusion * 255;
    //return ldr * 255;
    //return hdr
}

This is what the image looks like:

[Image: bad image output]

I've analyzed the image and tried various color space conversions, but the data appears to be garbage.

The OpenCV framework is the latest compiled 3.0.0 release from the opencv.org website. The RC and alpha produce the same results, and the current source won't build (for iOS or OS X). I think my next step is to try compiling the framework from scratch, or to get the example running on another platform, to see whether the problem is platform-specific or lies with the OpenCV HDR functionality itself. But before I do that, I thought I'd put the question up on Stack Overflow to see if anyone has run into the same problem, or whether I'm missing something blindingly obvious.

I've uploaded a sample Xcode project here:

https://github.com/artandmath/openCVHDRSwiftExample

Getting OpenCV working with Swift was done with the help of user foundry on GitHub.

1 Answer:

Answer 0 (score: 2):

Thanks to foundry for pointing me in the right direction. The UIImage+OpenCV class extension expects 8 bits per color channel, but the HDR functions spit out 32 bits per channel (which is actually what I want). Converting the image matrix back to 8 bits per channel for display purposes before converting it to a UIImage fixes the problem.

Here is the resulting image:

[Image: the expected result!]

Here is the fixed function:

cv::Mat mergeToHDR (vector<Mat>& images, vector<float>& times)
{
    Mat response;
    //Ptr<CalibrateDebevec> calibrate = createCalibrateDebevec();
    //calibrate->process(images, response, times);

    Ptr<CalibrateRobertson> calibrate = createCalibrateRobertson();
    calibrate->process(images, response, times);

    // create HDR
    Mat hdr;
    Ptr<MergeDebevec> merge_debevec = createMergeDebevec();
    merge_debevec->process(images, hdr, times, response);

    // create LDR
    Mat ldr;
    Ptr<TonemapDurand> tonemap = createTonemapDurand(2.2f);
    tonemap->process(hdr, ldr);

    // create fusion
    Mat fusion;
    Ptr<MergeMertens> merge_mertens = createMergeMertens();
    merge_mertens->process(images, fusion);

    /*
     Uncomment what kind of tonemapped image or hdr to return
     Convert back to 8-bits per channel because that is what
     the UIImage+OpenCV class extension is expecting
    */


    // tone mapped
    /*
    Mat ldr8bit;
    ldr = ldr * 255;
    ldr.convertTo(ldr8bit, CV_8U);
    return ldr8bit;
    */

    // fusion
    Mat fusion8bit;
    fusion = fusion * 255;
    fusion.convertTo(fusion8bit, CV_8U);
    return fusion8bit;

    // hdr
    /*
    Mat hdr8bit;
    hdr = hdr * 255;
    hdr.convertTo(hdr8bit, CV_8U);
    return hdr8bit;
    */
}

Also, here is a fix for the initWithCVMat method in the OpenCV+UIImage class extension, which is based on one of the iOS tutorials in the iOS section on opencv.org:

http://docs.opencv.org/2.4/doc/tutorials/ios/image_manipulation/image_manipulation.html#opencviosimagemanipulation

When creating a new CGImageRef with floating-point data, it needs to be explicitly told that it is being handed floating-point data, and the byte order of the image data coming from OpenCV needs to be reversed. Now iOS/Quartz has the float data! It's a somewhat hacky fix, because the method still only handles 8 or 32 bits per channel without alpha, and doesn't account for every kind of image that could be passed from Mat to UIImage.

- (id)initWithCVMat:(const cv::Mat&)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorSpace;

    size_t elemSize = cvMat.elemSize();
    size_t elemSize1 = cvMat.elemSize1();

    size_t channelCount = elemSize/elemSize1;
    size_t bitsPerChannel = 8 * elemSize1;
    size_t bitsPerPixel = bitsPerChannel * channelCount;

    if (channelCount == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }

    // Tell CGImageRef to use different bitmap info if handed 32-bit data
    uint32_t bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;

    if (bitsPerChannel == 32 ){
        bitmapInfo = kCGImageAlphaNoneSkipLast | kCGBitmapFloatComponents | kCGBitmapByteOrder32Little;
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(cvMat.cols,                                 //width
                                        cvMat.rows,                                 //height
                                        bitsPerChannel,                             //bits per component
                                        bitsPerPixel,                               //bits per pixel
                                        cvMat.step[0],                              //bytesPerRow
                                        colorSpace,                                 //colorspace
                                        bitmapInfo,                                 // bitmap info
                                        provider,                                   //CGDataProviderRef
                                        NULL,                                       //decode
                                        false,                                      //should interpolate
                                        kCGRenderingIntentDefault                   //intent
                                        );                     

    // Getting UIImage from CGImage
    self = [self initWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    return self;
}