image.getRaster().getDataBuffer() returns an array of negative values

Time: 2015-03-24 07:02:22

Tags: java bufferedimage

This answer suggests that it's over 10 times faster to loop over the pixel array than to use BufferedImage.getRGB. That difference is too big to ignore in a computer vision project, so I rewrote my IntegralImage method to compute the integral image using the pixel array:

/* Generate an integral image. Every pixel of such an image contains the sum of colors of
     all the pixels before it and itself.
  */
  public static double[][][] integralImage(BufferedImage image) {
    //Cache width and height in variables
    int w = image.getWidth();
    int h = image.getHeight();
    //Create the 2D array as large as the image is
    //Notice that I use [Y, X] coordinates to comply with the formula
    double integral_image[][][] = new double[h][w][3];

    //Variables for the image pixel array looping
    final int[] pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
    //final byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
    //If the image has alpha, there will be 4 elements per pixel
    final boolean hasAlpha = image.getAlphaRaster() != null;
    final int pixel_size = hasAlpha?4:3;
    //If there's alpha it's the first of 4 values, so we skip it
    final int pixel_offset = hasAlpha?1:0;
    //Coordinates, will be iterated too
    //It's faster than calculating them using % and multiplication
    int x=0;
    int y=0;

    int pixel = 0;
    //Tmp storage for color
    int[] color = new int[3];
    //Loop through pixel array
    for(int i=0, l=pixels.length; i<l; i+=pixel_size) {
      //Prepare all the colors in advance
      color[2] = ((int) pixels[pixel + pixel_offset] & 0xff); // blue;
      color[1] = ((int) pixels[pixel + pixel_offset + 1] & 0xff); // green;
      color[0] = ((int) pixels[pixel + pixel_offset + 2] & 0xff); // red;
      //For every color, calculate the integrals
      for(int j=0; j<3; j++) {
        //Calculate the integral image field
        double A = (x > 0 && y > 0) ? integral_image[y-1][x-1][j] : 0;
        double B = (x > 0) ? integral_image[y][x-1][j] : 0;
        double C = (y > 0) ? integral_image[y-1][x][j] : 0;
        integral_image[y][x][j] = - A + B + C + color[j];
      }
      //Iterate coordinates
      x++;
      if(x>=w) {
        x=0;
        y++;        
      }
    }
    //Return the array
    return integral_image;
  }

The problem is that if I use this debug output inside the for loop:

  if(x==0) {
    System.out.println("rgb["+pixels[pixel+pixel_offset+2]+", "+pixels[pixel+pixel_offset+1]+", "+pixels[pixel+pixel_offset]+"]");
    System.out.println("rgb["+color[0]+", "+color[1]+", "+color[2]+"]");
  }

this is what I get:

rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
...

So how do I correctly retrieve the pixel array of a BufferedImage?

1 Answer:

Answer 0: (score: 2)

One mistake in the code above that is easy to miss is that the for loop doesn't loop the way you expect: the loop updates i, while the loop body uses pixel for array indexing, and pixel is never incremented. Thus, you will only ever see the values of the very first pixel (array elements 1, 2 and 3 in your case).
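
A minimal sketch of one way to fix that indexing (my own, not part of the original answer): drop the separate pixel counter and index with i, which the loop actually advances. Note that this only makes sense if the buffer really is byte-interleaved; for packed int buffers see below.

// Hypothetical fix: index with the same variable the loop advances
for (int i = 0, l = pixels.length; i < l; i += pixel_size) {
    color[2] = pixels[i + pixel_offset]     & 0xff; // blue
    color[1] = pixels[i + pixel_offset + 1] & 0xff; // green
    color[0] = pixels[i + pixel_offset + 2] & 0xff; // red
    // ... integral image update as before ...
}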


Apart from that:

The "problem" with the negative pixel values is most likely that your code assumes the BufferedImage stores its pixels "pixel interleaved"; however, they are stored "pixel packed". That is, all the samples (R, G, B and A) for a single pixel are stored in one single sample, an int. This is the case for all the BufferedImage.TYPE_INT_* types (while the BufferedImage.TYPE_nBYTE_* types are stored interleaved).
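
A small sketch (my own, not from the answer) for checking how a given BufferedImage actually stores its pixels before deciding how to read them, using the standard java.awt.image classes:

DataBuffer buffer = image.getRaster().getDataBuffer();
if (buffer instanceof DataBufferInt) {
    // Pixel packed: one int per pixel (TYPE_INT_RGB, TYPE_INT_ARGB, ...)
    int[] packed = ((DataBufferInt) buffer).getData();
} else if (buffer instanceof DataBufferByte) {
    // Pixel interleaved: one byte per sample (TYPE_3BYTE_BGR, TYPE_4BYTE_ABGR, ...)
    byte[] interleaved = ((DataBufferByte) buffer).getData();
}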

It is completely normal to have negative values in the raster; this will be the case for any pixel that is less than 50% transparent (i.e. 50% or more opaque), because of how the 4 samples are packed into the int, and because int is a signed type in Java.
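
To illustrate (my example, not from the answer): a fully opaque black pixel in a TYPE_INT_ARGB image is packed as 0xFF000000. With the top (sign) bit set, that int prints as -16777216, which is exactly the value showing up in the debug output above:

int opaqueBlack = 0xFF000000;     // A = 255, R = 0, G = 0, B = 0
System.out.println(opaqueBlack);  // prints -16777216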

In that case, use:

int[] color = new int[3];

for (int i = 0; i < pixels.length; i++) {
    // Assuming TYPE_INT_RGB, TYPE_INT_ARGB or TYPE_INT_ARGB_PRE
    // For TYPE_INT_BGR, you need to reverse the colors.

    // You seem to ignore alpha, is that correct?
    color[0] = ((pixels[i] >> 16) & 0xff); // red;
    color[1] = ((pixels[i] >>  8) & 0xff); // green;
    color[2] = ( pixels[i]        & 0xff); // blue;

    // The rest of the computations...
}

The other possibility is that you have created a custom type image (BufferedImage.TYPE_CUSTOM) that really does use a 32-bit unsigned integer per sample. That is possible; however, int is still a signed type in Java, so you need to mask off the sign bit. To complicate things a little, in Java -1 & 0xFFFFFFFF == -1, because any computation on an int is still an int unless you explicitly say otherwise (doing the same on a byte or short value would have been "scaled up" to an int). To get a positive value, you need to use a long, like this: -1 & 0xFFFFFFFFL (which is 4294967295).
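
A tiny demonstration of that point (my own):

int value = -1;
System.out.println(value & 0xFFFFFFFF);  // still -1: int & int is evaluated as an int
System.out.println(value & 0xFFFFFFFFL); // 4294967295: promoted to long, sign bits masked off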

In that case, use:

long[] color = new long[3];

for(int i = 0; i < pixels.length; i += pixel_size) {
    // Somehow assuming BGR order in input, and RGB output (color)
    // Still ignoring alpha
    color[0] = (pixels[i + pixel_offset + 2] & 0xFFFFFFFFL); // red;
    color[1] = (pixels[i + pixel_offset + 1] & 0xFFFFFFFFL); // green;
    color[2] = (pixels[i + pixel_offset    ] & 0xFFFFFFFFL); // blue;

    // The rest of the computations...
}

I don't know what type of image you have, so I can't say for sure which of the two is your problem, but it is one of them. :-)

PS: BufferedImage.getAlphaRaster() may be an expensive and not very accurate way to tell whether the image has alpha. It's better to just use image.getColorModel().hasAlpha(). See also hasAlpha vs getAlphaRaster.
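
For reference, a minimal sketch of that cheaper check (my own, using the standard API mentioned above):

// Cheaper and more reliable than image.getAlphaRaster() != null
final boolean hasAlpha = image.getColorModel().hasAlpha();
final int pixel_size = hasAlpha ? 4 : 3; // only relevant for interleaved (byte) buffers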