Efficient histogram lookup in MATLAB

Date: 2015-07-29 17:12:34

Tags: matlab image-processing matrix histogram

I have a large 3D matrix (roughly 1000x1000x100) containing values that correspond to bins of a normalized, high-resolution histogram. There is one histogram per index along the third matrix dimension (e.g., 100 histograms for the example dimensions).

What is the fastest way to look up the probability of the values in a 2D slice, i.e., the value of the bin each one falls into in the corresponding normalized histogram?

The code I have now is far too slow:

probs = zeros(rows, cols, dims);
for k = 1 : dims
    tmp = data(:,:,k);
    tmp = tmp(:); % Flatten the slice so hist() builds a single histogram for it
    [h, centers] = hist(tmp, 1000);
    h = h / sum(h); % Normalize the histogram
    for r = 1 : rows
        for c = 1 : cols
            % Identify bin center closest to value
            [~, idx] = min(abs(centers - data(r, c, k)));
            probs(r,c,k) = h(idx);
        end
    end
end

For loops are usually (though not always) less efficient than vectorized code, and nested loops tend to be even worse. How can I do this with fewer loops without running out of memory? I tried a few repmat calls to vectorize the whole thing, but a 1000x1000x1000x100 matrix crashed my MATLAB session.
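That crash is unsurprising: a back-of-the-envelope estimate (assuming double precision, 8 bytes per element) puts the fully expanded 4D array at roughly 745 GB:

numel4d = 1000 * 1000 * 1000 * 100;  % elements in the fully expanded array
bytes   = 8 * numel4d;               % 8 bytes per double
fprintf('%.0f GB\n', bytes / 2^30)   % prints about 745 GB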

Note: I only have MATLAB 2014a, so while solutions using the new histogram() function are welcome, I'm still stuck with hist().
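(An aside for readers on R2014b or newer, which does not help with the 2014a constraint above: the third output of histcounts() returns each element's bin index directly, so the per-slice lookup reduces to a few indexed lines. A rough sketch of one iteration of the loop over k:)

tmp = data(:,:,k);
[h, ~, bin] = histcounts(tmp(:), 1000);     % bin(i) is the bin index of tmp(i)
h = h / sum(h);                             % normalize the histogram
probs(:,:,k) = reshape(h(bin), rows, cols); % probability lookup by indexing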

Here is a small-scale demo example that should run reproducibly:

rng(2); % Seed the RNG for repeatability
rows = 3;
cols = 3;
dims = 2;
data = repmat(1:3,3,1,2);
probs = zeros(rows, cols, dims);
for k = 1 : dims
    tmp = normrnd(0,1,1000,1);
    [h, centers] = hist(tmp);
    h = h / sum(h); % Normalize the histogram
    for r = 1 : rows
        for c = 1 : cols
            % Identify bin center closest to value
            [~, idx] = min(abs(centers - data(r, c, k)));
            probs(r,c,k) = h(idx);
        end
    end
end

When I run the above code, I get the following output (which makes sense, since the histograms come from standard normal samples):

probs(:,:,1) =

0.1370    0.0570    0.0030
0.1370    0.0570    0.0030
0.1370    0.0570    0.0030


probs(:,:,2) =

0.1330    0.0450    0.0050
0.1330    0.0450    0.0050
0.1330    0.0450    0.0050

Note: I found a working solution, posted in an answer below.

2 answers:

Answer 0 (score: 1)

I'll assume you have a matrix centersAll (with dims rows) containing the histogram centers for each third-dimension index, and a similar dims-row matrix hAll containing the histogram values.

Reshape centersAll into the third and fourth dimensions, use bsxfun to compute the differences, take the minimum along the fourth dimension, and use the resulting indices to index into hAll:

[~, idx] = min(abs(bsxfun(@minus, data, reshape(centersAll,1,1,dims,[]))), [], 4);
hAllt = hAll.';
probs2 = hAllt(bsxfun(@plus, idx, reshape(0:dims-1, 1,1,[])*size(hAll,2)));

Check:

%// Data
clear all
rng(2); % Seed the RNG for repeatability
rows = 3;
cols = 3;
dims = 2;
data = repmat(1:3,3,1,2);
for k = 1 : dims
    tmp = normrnd(0,1,1000,1);
    [h, centers] = hist(tmp);
    h = h / sum(h); % Normalize the histogram                   
    centersAll(k,:) = centers;
    hAll(k,:) = h;
end

%// With loops
probs = zeros(rows, cols, dims);
for k = 1 : dims
    for r = 1 : rows
        for c = 1 : cols
            % Identify bin center closest to value
            centers = centersAll(k,:);
            h = hAll(k,:);
            [~, idx] = min(abs(centers - data(r, c, k)));
            probs(r,c,k) = h(idx);
        end
    end
end

%// Without loops
[~, idx] = min(abs(bsxfun(@minus, data, reshape(centersAll,1,1,dims,[]))), [], 4);
hAllt = hAll.';
probs2 = hAllt(bsxfun(@plus, idx, reshape(0:dims-1, 1,1,[])*size(hAll,2)));

%// Check
probs==probs2

which gives

ans(:,:,1) =
     1     1     1
     1     1     1
     1     1     1
ans(:,:,2) =
     1     1     1
     1     1     1
     1     1     1
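A design note: bsxfun is only needed here because of the R2014a constraint in the question; on R2016b or newer, implicit expansion can replace both bsxfun calls. A rough sketch of the equivalent lookup under that assumption:

% Equivalent lookup with implicit expansion (R2016b+), sketch only
[~, idx] = min(abs(data - reshape(centersAll, 1, 1, dims, [])), [], 4);
hAllt = hAll.';
probs2 = hAllt(idx + reshape(0:dims-1, 1, 1, [])*size(hAll, 2));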

Answer 1 (score: 0)

Best solution:

Without the new histogram() function (i.e., on anything before R2014b), the best approach is to use the hist() and histc() functions together.

There are two cases to consider:

  1. Binning data and then looking up that same data in the resulting histogram
  2. Looking up values in a histogram built from different data

The first case is the simpler one. A nice feature of histc() is that it returns both the histogram and the indices of the bins the data was placed into. At that point, we should be done. Alas, sadly, we are not. Because the code behind histc() and hist() bins data differently, we end up with two different histograms depending on which one we use. The reason appears to be that histc() assigns bins based on a strictly-greater-than condition, while hist() uses greater-than-or-equal-to. As a result, the equivalent function calls:

    % Using histc
    binEdges = linspace(min(data),max(data),numBins+1);
    [h1, indices] = histc(data, binEdges);

    % Using hist
    [h2, centers] = hist(data, numBins); % second output is bin centers, not indices
    

result in different histograms: length(h1) - length(h2) = 1.
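To see the off-by-one concretely, here is a tiny self-contained check (the variable x and the bin count are made up for illustration):

    x = rand(1, 100);                               % made-up sample data
    numBins = 10;
    binEdges = linspace(min(x), max(x), numBins+1);
    h1 = histc(x, binEdges);                        % numBins+1 counts; last bin holds values equal to max(x)
    h2 = hist(x, numBins);                          % numBins counts
    disp(length(h1) - length(h2))                   % prints 1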

So, to deal with this, we simply add the count in the last bin of h1 to the count in its second-to-last bin, lop off that last bin, and adjust the indices accordingly:

    % Account for "strictly greater than" bug that results in an extra bin
    
    h1(:, numBins) = h1(:, numBins) + h1(:, end); % Combine last two bins
    indices(indices == numBins + 1) = numBins; % Adjust indices to point to right spot
    h1 = h1(:, 1:end-1); % Lop off the extra bin
    

Now you are left with an h1 that matches h2, plus an indices vector recording which bin of h1 each of your data points fell into. You can therefore look up the probability information by efficient indexing instead of looping.

Runnable example code:

    rng(2); % Seed the RNG for repeatability
    
    % Generate some data
    numBins = 6;
    data = repmat(rand(1,5), 3, 1, 2);
    [rows, cols, dims] = size(data);
    N = rows*cols;
    
    % Bin all data into a histogram, keeping track of which bin each data point
    % gets mapped to
    h = zeros(dims, numBins + 1);
    indices = zeros(dims, N);
    for k = 1 : dims
        tmp = data(:,:,k);
        tmp = tmp(:)';
        binEdges = linspace(min(tmp),max(tmp),numBins+1);
        [h(k,:), indices(k,:)] = histc(tmp, binEdges);
    end
    
    % Account for "strictly greater than" bug that results in an extra bin
    h(:, numBins) = h(:, numBins) + h(:, end); % Add count in last bin to the second-to-last bin
    indices(indices == numBins + 1) = numBins; % Adjust indices accordingly
    h = h(:,1:end-1); % Lop off the extra bin
    h = h ./ repmat(sum(h,2), 1, numBins); % Normalize all histograms
    
    % Now we can efficiently look up probabilities by indexing instead of
    % looping
    for k = 1 : dims
        probs(:, :, k) = reshape(h(sub2ind(size(h), repmat(k, 1, size(indices, 2)), indices(k,:))), rows, cols);
    end
    probs
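If you prefer, the remaining loop over k can also be replaced by a single sub2ind call; a small sketch, assuming h and indices keep the shapes built above (dims-by-numBins and dims-by-N):

    % Vectorized lookup across all slices at once (sketch)
    rowIdx = repmat((1:dims)', 1, N);              % histogram (row) index for every sample
    flat = h(sub2ind(size(h), rowIdx, indices));   % dims-by-N matrix of probabilities
    probs = reshape(flat', rows, cols, dims);      % back to rows-by-cols-by-dims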
    

The second case is harder, because you no longer have the luxury of tracking the bin indices while the histogram is created. We can get around this, however, by building a second histogram identical to the first and tracking the indices during that binning step.

For this approach, first compute an initial histogram with hist() on some histogram "training" data. All you need to store is the minimum and maximum of that training data. With that information, we can reproduce the same histogram using linspace() and histc(), again adjusting for the extra-bin "bug" that histc() introduces.

The key here is handling outliers, i.e., values in the new data set that fall outside the pre-computed histogram. Since those should be assigned a frequency/probability of 0, we simply append an extra bin with value 0 to the pre-computed histogram and map any unbinned new data to that index.

Here is commented, runnable code for the second approach:

    % PRE-COMPUTE A HISTOGRAM
    rng(2); % Seed the RNG for repeatability
    
    % Build some data
    numBins = 6;
    old_data = repmat(rand(1,5), 3, 1, 2);
    [rows, cols, dims] = size(old_data);
    
    % Store the min and max of each slice for the reconstruction step
    min_val = min(reshape(old_data, [], dims)); % 1-by-dims vector of per-slice minima
    max_val = max(reshape(old_data, [], dims)); % 1-by-dims vector of per-slice maxima
    
    % Just use hist() function while specifying number of bins this time
    % No need to track indices because we are going to be using this histogram
    % as a reference for looking up a different set of data
    h = zeros(dims, numBins);
    for k = 1 : dims
        tmp = old_data(:,:,k);
        tmp = tmp(:)';
        h(k,:) = hist(tmp, numBins);
    end
    h = h ./ repmat(sum(h, 2), 1, numBins); % Normalize histograms
    h(:, end + 1) = 0; % Map to here any data to that falls outside the pre-computed histogram
    
    % NEW DATA
    rng(3); % Seed RNG again for repeatability
    
    % Generate some new data
    new_data = repmat(rand(1,4), 4, 1, 2); % NOTE: Doesn't have to be same size
    [rows, cols, dims] = size(new_data);
    N = rows*cols;
    
    
    % Bin new data with histc() using boundaries from pre-computed histogram
    h_new = zeros(dims, numBins + 1);
    indices_new = zeros(dims, N);
    for k = 1 : dims
        tmp = new_data(:,:,k);
        tmp = tmp(:)';
    
        % Determine bins for new histogram with the same boundaries as
        % pre-computed one. This ensures that our resulting histograms are
        % identical, except for the "greater-than" bug which is accounted for
        % below.
        binEdges = linspace(min_val(k), max_val(k), numBins+1);
        [h_new(k,:), indices_new(k,:)] = histc(tmp, binEdges);
    end
    
    % Adjust for the "greater-than" bug
    % When adjusting this histogram, we are directing outliers that don't
    % fit into the pre-computed histogram to look up probabilities from that 
    % extra bin we added to the pre-computed histogram.
    h_new(:, numBins) = h_new(:, numBins) + h_new(:, end); % Add count in last bin to the second-to-last bin
    indices_new(indices_new == numBins + 1) = numBins; % Adjust indices accordingly
    indices_new(indices_new == 0) = numBins + 1; % Direct any unbinned data to the 0-probability last bin
    h_new = h_new ./ repmat(sum(h_new,2), 1, numBins + 1); % Normalize all histograms
    
    % Now we should have all of the new data binned into a histogram
    % that matches the pre-computed one. The catch is, we now have the indices
    % of the bins the new data was matched to. Thus, we can now use the same
    % efficient indexing-based look-up strategy as before to get probabilities
    % from the pre-computed histogram.
    for k = 1 : dims
        probs(:, :, k) = reshape(h(sub2ind(size(h), repmat(k, 1, size(indices_new, 2)), indices_new(k,:))), rows, cols);
    end
    probs
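
Finally, for readers on R2015a or newer (again, outside the original R2014a constraint), discretize() performs this bin lookup against pre-computed edges directly; out-of-range values come back as NaN, which plays the same role as the extra zero-probability bin above. A rough sketch for a single slice k:

    % R2015a+ sketch: bin lookup with discretize() instead of histc()
    binEdges = linspace(min_val(k), max_val(k), numBins+1);
    bin = discretize(new_data(:,:,k), binEdges);   % NaN for values outside the edges
    p = zeros(size(bin));                          % outliers keep probability 0
    p(~isnan(bin)) = h(k, bin(~isnan(bin)));       % look up in the pre-computed histogram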