Laplacian pyramid produces strange results?

Asked: 2018-11-09 16:08:56

Tags: java android matlab opencv image-processing

I am trying to build a Laplacian pyramid based on the following MATLAB code:

% I is the image array, where its size is r x c x 3(<-RGB channels).
function pyr = laplacian_pyramid(I,nlev)

r = size(I,1);
c = size(I,2);

% recursively build pyramid
pyr = cell(nlev,1);
filter = [.0625, .25, .375, .25, .0625];
J = I;
for l = 1:nlev - 1
    % apply low pass filter, and downsample
    I = downsample(J,filter);
    odd = 2*size(I) - size(J);  % for each dimension, check if the upsampled version has to be odd
    % in each level, store difference between image and upsampled low pass version
    pyr{l} = J - upsample(I,odd,filter);
    J = I; % continue with low pass image
end
pyr{nlev} = J; % the coarsest level contains the residual low pass image

downsample() looks like this:

function R = downsample(I, filter)

border_mode = 'symmetric';

% low pass, convolve with separable filter
R = imfilter(I,filter,border_mode);     %horizontal
R = imfilter(R,filter',border_mode);    %vertical

% decimate
r = size(I,1);
c = size(I,2);
R = R(1:2:r, 1:2:c, :);  

And here is upsample():

function R = upsample(I,odd,filter)

% increase resolution
I = padarray(I,[1 1 0],'replicate'); % pad the image with a 1-pixel border
r = 2*size(I,1);
c = 2*size(I,2);
k = size(I,3);
R = zeros(r,c,k);
R(1:2:r, 1:2:c, :) = 4*I; % increase size 2 times; the padding is now 2 pixels wide

% interpolate, convolve with separable filter
R = imfilter(R,filter);     %horizontal
R = imfilter(R,filter');    %vertical

% remove the border
R = R(3:r - 2 - odd(1), 3:c - 2 - odd(2), :);
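To make the build-and-reconstruct logic easier to trace, here is a 1D pure-Java sketch of the same loop (my own illustration, not from the question: the class and method names are invented, the zero-stuffing gain is 2 instead of 4 because there is only one dimension, and the border handling is simplified to mirroring throughout). Because each level stores the exact difference against the upsampled low-pass signal, reversing the loop reconstructs the input:

```java
import java.util.Arrays;

// 1D illustration of the Laplacian pyramid loop above (names invented).
public class Pyramid1D {

    static final double[] K = {0.0625, 0.25, 0.375, 0.25, 0.0625};

    // Convolve with K using mirrored borders (simplified stand-in for imfilter).
    static double[] conv(double[] x) {
        int n = x.length;
        double[] y = new double[n];
        for (int i = 0; i < n; i++) {
            double s = 0;
            for (int t = -2; t <= 2; t++) {
                int j = i + t;
                if (j < 0) j = -j - 1;          // mirror left edge
                if (j >= n) j = 2 * n - 1 - j;  // mirror right edge
                s += x[j] * K[t + 2];
            }
            y[i] = s;
        }
        return y;
    }

    // Low-pass filter, then keep every other sample.
    static double[] down(double[] x) {
        double[] f = conv(x);
        double[] r = new double[(x.length + 1) / 2];
        for (int i = 0; i < r.length; i++) r[i] = f[2 * i];
        return r;
    }

    // Replicate-pad by 1, zero-stuff with gain 2, filter, crop to length 2n - odd.
    static double[] up(double[] x, int odd) {
        int n = x.length;
        double[] p = new double[n + 2];
        p[0] = x[0];
        p[n + 1] = x[n - 1];
        System.arraycopy(x, 0, p, 1, n);
        double[] z = new double[2 * (n + 2)];
        for (int i = 0; i < p.length; i++) z[2 * i] = 2 * p[i];
        return Arrays.copyOfRange(conv(z), 2, 2 + 2 * n - odd);
    }

    // pyr[l] = J - up(down(J)); the last level is the residual low-pass signal.
    static double[][] build(double[] signal, int nlev) {
        double[][] pyr = new double[nlev][];
        double[] J = signal;
        for (int l = 0; l < nlev - 1; l++) {
            double[] I = down(J);
            int odd = 2 * I.length - J.length;
            double[] u = up(I, odd);
            pyr[l] = new double[J.length];
            for (int i = 0; i < J.length; i++) pyr[l][i] = J[i] - u[i];
            J = I;
        }
        pyr[nlev - 1] = J;
        return pyr;
    }

    // Inverse of build(): J = pyr[l] + up(J_coarse), coarse to fine.
    static double[] reconstruct(double[][] pyr) {
        double[] J = pyr[pyr.length - 1];
        for (int l = pyr.length - 2; l >= 0; l--) {
            int odd = 2 * J.length - pyr[l].length;
            double[] u = up(J, odd);
            double[] next = new double[pyr[l].length];
            for (int i = 0; i < next.length; i++) next[i] = pyr[l][i] + u[i];
            J = next;
        }
        return J;
    }

    public static void main(String[] args) {
        double[] signal = {1, 5, 2, 8, 3, 7, 4, 6, 9, 0};
        double[] rec = reconstruct(build(signal, 3));
        // reconstruction matches the input signal up to floating-point rounding
        System.out.println(Arrays.toString(rec));
    }
}
```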

This MATLAB code runs correctly. Just focus on the downsample() function, because the OpenCV version of my function is where the problem occurs.


Now, here is my attempted OpenCV version of this MATLAB code:

private List<Mat> laplacianPyramid(Mat mat,int depth)
    {
        //mat.type() is CV_8UC3 (16).
        List<Mat> pyramid = new ArrayList<Mat>();
        //I make a clone so I don't ruin the original matrix.
        Mat clone = mat.clone();
        Mat J = clone;
        for(int i=0;i<=depth-2;i++)
        {
            clone = image_reduce(J);
            Mat temp = new Mat();
            Point odd = new Point(clone.size().height*2 - J.height(), clone.size().width*2 - J.width());
            Core.subtract(J, image_expand(clone, odd), temp);
            pyramid.add(temp);
            J = clone;
        }
        pyramid.add(J);
        return pyramid;
    }

And this is the OpenCV version of my upsample():

private Mat image_expand(Mat image, Point odd){
        //I make a clone so I don't ruin the original image.
        Mat imageClone = image.clone();
        copyMakeBorder(imageClone, imageClone, 1, 1, 1, 1, BORDER_REPLICATE);
        Mat kernelX = getGaussianKernel();
        Mat kernelY = new Mat();
        Core.transpose(kernelX, kernelY);
        Mat UIntVer = new Mat(imageClone.size(), CV_8UC3);
        imageClone.convertTo(UIntVer, CV_8UC3);
        Imgproc.resize(UIntVer, UIntVer, new Size(imageClone.width()*2, imageClone.height()*2), 0, 0, Imgproc.INTER_NEAREST);

        //Now interleave zeros between the columns and rows, just like the MATLAB version.
        Mat mask = new Mat(2,2, CV_8UC1);
        int[][] array = new int[2][2];
        array[0][0] = 255;
        array[1][0] = 0;
        array[0][1] = 0;
        array[1][1] = 0;
        for (int i=0; i<2; i++) {
            for (int j = 0; j < 2; j++) {
                mask.put(i, j, array[i][j]);
            }
        }
        //mask becomes twice the size of image.
        Mat biggerMask = new Mat();
        Core.repeat(mask, imageClone.height(), imageClone.width(), biggerMask);
        List<Mat> rgbUIntVer = new ArrayList<Mat>();
        Core.split(UIntVer,rgbUIntVer);
        Core.bitwise_and(rgbUIntVer.get(0), biggerMask, rgbUIntVer.get(0));
        Core.bitwise_and(rgbUIntVer.get(1), biggerMask, rgbUIntVer.get(1));
        Core.bitwise_and(rgbUIntVer.get(2), biggerMask, rgbUIntVer.get(2));
        Core.merge(rgbUIntVer, UIntVer);

        int r = imageClone.height()*2;
        int c = imageClone.width()*2;
        Mat result = new Mat(r, c, CV_32FC3);
        UIntVer.convertTo(UIntVer, CV_32FC3);
        Scalar four = new Scalar(4);
        Core.multiply(UIntVer, four, UIntVer);
        Imgproc.sepFilter2D(UIntVer,result,-1,kernelX,kernelY,new Point(-1,-1) ,0,BORDER_DEFAULT);
        Rect roi = new Rect(2, 2, c-4-(int)odd.y, r-4-(int)odd.x);
        result = new Mat(result, roi);
        return result;
    }
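The resize-then-mask construction above is meant to reproduce MATLAB's zero-stuffing assignment `R(1:2:r, 1:2:c, :) = 4*I`. A quick pure-Java check (my own illustration, independent of OpenCV; names invented) confirms that nearest-neighbor 2x upsampling followed by a `{255, 0; 0, 0}` tiled mask is the same as writing each pixel into the even rows and columns of a zero matrix:

```java
import java.util.Arrays;

// Checks that 2x nearest-neighbor upsampling + a {255,0;0,0} tile mask
// equals direct zero-stuffing into even rows and columns.
public class ZeroStuffCheck {

    // Nearest-neighbor 2x upsample: each pixel becomes a 2x2 block.
    static int[][] nearest2x(int[][] img) {
        int r = img.length, c = img[0].length;
        int[][] out = new int[2 * r][2 * c];
        for (int i = 0; i < 2 * r; i++)
            for (int j = 0; j < 2 * c; j++)
                out[i][j] = img[i / 2][j / 2];
        return out;
    }

    // Keep only positions where both indices are even, i.e. where the
    // tiled mask is 255 (bitwise_and with 255 preserves an 8-bit value).
    static int[][] maskEven(int[][] img) {
        int[][] out = new int[img.length][img[0].length];
        for (int i = 0; i < img.length; i += 2)
            for (int j = 0; j < img[0].length; j += 2)
                out[i][j] = img[i][j];
        return out;
    }

    // MATLAB-style zero stuffing: R(1:2:r, 1:2:c) = I.
    static int[][] zeroStuff(int[][] img) {
        int[][] out = new int[2 * img.length][2 * img[0].length];
        for (int i = 0; i < img.length; i++)
            for (int j = 0; j < img[0].length; j++)
                out[2 * i][2 * j] = img[i][j];
        return out;
    }

    public static void main(String[] args) {
        int[][] img = {{10, 20}, {30, 40}};
        int[][] a = maskEven(nearest2x(img));
        int[][] b = zeroStuff(img);
        System.out.println(Arrays.deepEquals(a, b)); // true
    }
}
```

So the masking step itself is sound; it is the arithmetic after it that goes wrong, as the answer below shows.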

The problem with this OpenCV version is that the result comes out plain blue. A Laplacian pyramid should hold edge-detection-like results, but a sample result is shown below. The upper image is the original input, and the lower image is the bottom level of the resulting pyramid.

[image: original input (top) and the bottom level of the resulting pyramid (bottom)]

I have checked that the input image is read correctly. Something is wrong in my code, but I cannot find where. Any help would be greatly appreciated!

I suspect that some of the processing functions operate only on the R channel, and not on the G and B channels.


I found that the problem is image_expand(). It outputs only a red version of the image, which explains why the result image looks bluish, since the result is the difference between the original image and the output of image_expand(). So the problem lies in image_expand().


Just in case, here is my getGaussianKernel() code:

private Mat getGaussianKernel(){
        float[] kernel = {0.0625f, 0.25f, 0.375f, 0.25f, 0.0625f};
        Mat mat = new Mat(5,1,CV_32FC1);
        mat.put(0,0,kernel);
        return mat;
    }

1 Answer:

Answer 0: (score: 0)

I found the problem. As @Ander Biguri and @Cris Luengo told me (my apologies if I assumed your gender), the problem was in these lines of image_expand():

UIntVer.convertTo(UIntVer, CV_32FC3);
Scalar four = new Scalar(4);
Core.multiply(UIntVer, four, UIntVer);
Apparently, to multiply an image matrix by a scalar value, you have to Core.split() it, apply the multiplication to each channel, and then Core.merge() the channels back. The reason is that the Scalar is broadcast per channel, and its missing components default to 0: new Scalar(4) acts as (4, 0, 0, 0), so Core.multiply() scaled only the first (red) channel and zeroed out the others. Passing new Scalar(4, 4, 4) would also have worked. I hope this helps anyone else having trouble with Core.multiply().
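The per-channel broadcast can be simulated without OpenCV. This pure-Java sketch (my own illustration; the class and method names are invented) mimics how OpenCV pads a Scalar to the channel count with zeros, which is why a single-argument Scalar keeps only the first channel while a three-argument one scales all three:

```java
import java.util.Arrays;

// Simulates how a Scalar is broadcast across the channels of a pixel:
// missing Scalar components default to 0, so Scalar(4) acts as (4, 0, 0, 0).
public class ScalarBroadcastDemo {

    // Pad the scalar to the pixel's channel count, filling with zeros,
    // then multiply channel-wise.
    static double[] multiplyPixel(double[] pixel, double... scalar) {
        double[] s = Arrays.copyOf(scalar, pixel.length); // missing components -> 0
        double[] out = new double[pixel.length];
        for (int c = 0; c < pixel.length; c++) out[c] = pixel[c] * s[c];
        return out;
    }

    public static void main(String[] args) {
        double[] rgb = {10, 20, 30};
        System.out.println(Arrays.toString(multiplyPixel(rgb, 4)));       // [40.0, 0.0, 0.0]
        System.out.println(Arrays.toString(multiplyPixel(rgb, 4, 4, 4))); // [40.0, 80.0, 120.0]
    }
}
```

With this behavior in mind, both the split/multiply/merge fix and a full three-component scalar restore all three channels.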