OpenCV - How to apply adaptive thresholding to an image on iOS

Posted: 2017-05-01 05:10:27

Tags: c++ ios objective-c opencv

I am trying to apply adaptive thresholding to an image of an A4 sheet of paper, like the one below:

I am using the code below to apply the image processing:

+ (UIImage *)processImageWithOpenCV:(UIImage*)inputImage {
    cv::Mat cvImage = [inputImage CVMat];
    cv::Mat res;
    cv::cvtColor(cvImage, cvImage, CV_RGB2GRAY);
    cvImage.convertTo(cvImage,CV_32FC1,1.0/255.0);
    CalcBlockMeanVariance(cvImage,res);
    res=1.0-res;
    res=cvImage+res;
    cv::threshold(res,res, 0.85, 1, cv::THRESH_BINARY);
    cv::resize(res, res, cv::Size(res.cols/2,res.rows/2));
    return [UIImage imageWithCVMat:cvImage];
}

void CalcBlockMeanVariance(cv::Mat& Img, cv::Mat& Res, float blockSide = 13) // blockSide - the parameter (set greater for larger font on image)
{
    cv::Mat I;
    Img.convertTo(I, CV_32FC1);
    Res = cv::Mat::zeros(Img.rows / blockSide, Img.cols / blockSide, CV_32FC1);
    cv::Mat inpaintmask;
    cv::Mat patch;
    cv::Mat smallImg;
    cv::Scalar m, s;

    for (int i = 0; i < Img.rows - blockSide; i += blockSide)
    {
        for (int j = 0; j < Img.cols - blockSide; j += blockSide)
        {
            patch = I(cv::Rect(j, i, blockSide, blockSide));
            cv::meanStdDev(patch, m, s);
            if (s[0] > 0.01) // Thresholding parameter (set smaller for lower contrast image)
            {
                Res.at<float>(i / blockSide, j / blockSide) = m[0];
            }
            else
            {
                Res.at<float>(i / blockSide, j / blockSide) = 0;
            }
        }
    }

    cv::resize(I, smallImg, Res.size());

    cv::threshold(Res, inpaintmask, 0.02, 1.0, cv::THRESH_BINARY);

    cv::Mat inpainted;
    smallImg.convertTo(smallImg, CV_8UC1, 255);

    inpaintmask.convertTo(inpaintmask, CV_8UC1);
    cv::inpaint(smallImg, inpaintmask, inpainted, 5, cv::INPAINT_TELEA); // requires opencv2/photo.hpp

    cv::resize(inpainted, Res, Img.size());
    Res.convertTo(Res, CV_8UC3);
}

Although the input image is grayscale, it outputs a yellow image, like this:

My assumption is that something in the conversion between cv::Mat and UIImage is producing a color image, but I can't figure out how to fix it.

Please ignore the status bar, as these images are screenshots from the iOS app.

Update: I tried using CV_8UC1 instead of CV_8UC3 in Res.convertTo(), and adding cvtColor(Res, Res, CV_GRAY2BGR);, but I still get a very similar result.

Could the conversion between cv::Mat and UIImage be what is causing this problem?
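If it is, I assume the fix is to scale the CV_32FC1 result (values in 0..1) back to 8-bit before creating the UIImage, and to return the processed res rather than cvImage. A minimal, untested sketch of what the end of processImageWithOpenCV: might look like:

    // Untested sketch: convert the 0..1 float result to 0..255 bytes,
    // since the UIImage conversion presumably expects 8-bit pixel data.
    cv::Mat out;
    res.convertTo(out, CV_8UC1, 255.0);
    return [UIImage imageWithCVMat:out];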

I would like my image to end up looking like the following.

2 answers:

Answer 0 (score: 5)

You can use the OpenCV framework and implement the following code:

+ (UIImage *)blackandWhite:(UIImage *)processedImage
{
    cv::Mat original = [MMOpenCVHelper cvMatGrayFromAdjustedUIImage:processedImage];

    cv::Mat new_image = cv::Mat::zeros(original.size(), original.type());

    // new_image = 1.4 * original - 50: boost contrast, then darken.
    original.convertTo(new_image, -1, 1.4, -50);
    original.release();

    UIImage *blackWhiteImage = [MMOpenCVHelper UIImageFromCVMat:new_image];
    new_image.release();

    return blackWhiteImage;
}

+ (cv::Mat)cvMatGrayFromAdjustedUIImage:(UIImage *)image
{
    cv::Mat cvMat = [self cvMatFromAdjustedUIImage:image];
    cv::Mat grayMat;
    if (cvMat.channels() == 1) {
        grayMat = cvMat;
    }
    else {
        grayMat = cv::Mat(cvMat.rows, cvMat.cols, CV_8UC1);
        cv::cvtColor(cvMat, grayMat, cv::COLOR_BGR2GRAY);
    }
    return grayMat;
}

+ (cv::Mat)cvMatFromAdjustedUIImage:(UIImage *)image
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;

    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,                 // Pointer to backing data
                                                    cols,                       // Width of bitmap
                                                    rows,                       // Height of bitmap
                                                    8,                          // Bits per component
                                                    cvMat.step[0],              // Bytes per row
                                                    colorSpace,                 // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault);

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);
    return cvMat;
}


+ (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];

    CGColorSpaceRef colorSpace;

    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    CGImageRef imageRef = CGImageCreate(cvMat.cols,                                     // Width
                                        cvMat.rows,                                     // Height
                                        8,                                              // Bits per component
                                        8 * cvMat.elemSize(),                           // Bits per pixel
                                        cvMat.step[0],                                  // Bytes per row
                                        colorSpace,                                     // Colorspace
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault,  // Bitmap info flags
                                        provider,                                       // CGDataProviderRef
                                        NULL,                                           // Decode
                                        false,                                          // Should interpolate
                                        kCGRenderingIntentDefault);                     // Intent

    UIImage *image = [[UIImage alloc] initWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    return image;
}

It worked for me; check the output for the document:
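For example, a usage sketch (assuming the methods above live in the MMOpenCVHelper class they reference, that imageView is an outlet in your view controller, and with a hypothetical asset name):

    UIImage *photo = [UIImage imageNamed:@"document"]; // hypothetical asset
    UIImage *result = [MMOpenCVHelper blackandWhite:photo];
    self.imageView.image = result;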

Answer 1 (score: 3)

Try this:

+ (UIImage *)processImageWithOpenCV:(UIImage*)inputImage {
    cv::Mat cvImage = [inputImage CVMat];
    threshold(cvImage, cvImage, 128, 255, cv::THRESH_BINARY);
    return [UIImage imageWithCVMat:cvImage];
}

Resulting image:
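If you need the threshold to adapt to local lighting (as the question title asks) rather than using one global cutoff, OpenCV's built-in cv::adaptiveThreshold can be used the same way. A sketch, assuming the same CVMat/imageWithCVMat category helpers from the question (the method name here is made up):

+ (UIImage *)adaptiveThresholdImageWithOpenCV:(UIImage *)inputImage {
    cv::Mat cvImage = [inputImage CVMat];
    // adaptiveThreshold expects a single-channel 8-bit image; use
    // cv::COLOR_RGBA2GRAY instead if the mat carries an alpha channel.
    cv::cvtColor(cvImage, cvImage, cv::COLOR_RGB2GRAY);
    cv::adaptiveThreshold(cvImage, cvImage,
                          255,                             // value for pixels that pass the local threshold
                          cv::ADAPTIVE_THRESH_GAUSSIAN_C,  // threshold = Gaussian-weighted local mean minus C
                          cv::THRESH_BINARY,
                          31,                              // odd neighborhood size; increase for larger text
                          15);                             // constant C subtracted from the local mean
    return [UIImage imageWithCVMat:cvImage];
}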