I am trying to implement the Floyd-Steinberg algorithm in Java, using java.awt.image.BufferedImage.
I used the algorithm described here with a custom palette, and I would expect to get more or less the same image as in the Wikipedia example (or the one generated by Gimp), but my version is very different.
You can see what I get.
I am obviously missing something (the colors of the output image do not belong to my palette), but I cannot figure out what.
What am I doing wrong?
Here is the code:
import javax.imageio.ImageIO;
import java.awt.*;
import java.awt.image.BufferedImage;
import java.awt.image.IndexColorModel;
import java.io.File;
import java.io.IOException;
public class FloydSteinbergTest {

    private static final Color[] PALETTE = new Color[]{
            new Color(221, 221, 221),
            new Color(19, 125, 62),
            new Color(179, 80, 188),
            new Color(107, 138, 201),
            new Color(177, 166, 39),
            new Color(65, 174, 56),
            new Color(208, 132, 153),
            new Color(64, 64, 64),
            new Color(154, 161, 161),
            new Color(46, 110, 137),
            new Color(126, 61, 181),
            new Color(46, 56, 141),
            new Color(79, 50, 31),
            new Color(53, 70, 27),
            new Color(150, 52, 48),
            new Color(25, 22, 22)};

    public static void main(String[] args) {
        String lImgFile = "/tmp/test.jpg";
        try {
            // Load image
            BufferedImage lImage = ImageIO.read(new File(lImgFile));
            BufferedImage lOutImage = applyDitheredPalette(lImage, PALETTE);
            ImageIO.write(lOutImage, "png", new File("/tmp/out.png"));
        } catch (IOException lEx) {
            System.out.println(lEx.getMessage());
        }
    }

    /**
     * @param pPalette Color palette to apply.
     * @param pImage Image to apply palette on.
     * @return {@link java.awt.image.BufferedImage} corresponding to pPalette applied on pImage using naive Floyd-Steinberg implementation
     */
    public static BufferedImage applyDitheredPalette(BufferedImage pImage, Color[] pPalette) {
        int lWidth = pImage.getWidth();
        int lHeight = pImage.getHeight();
        IndexColorModel lColorModel = paletteToColorModel(pPalette);
        BufferedImage lImageOut = new BufferedImage(lWidth, lHeight, BufferedImage.TYPE_BYTE_INDEXED, lColorModel);
        for (int y = (lHeight - 1); y >= 0; y--) {
            for (int x = 0; x < lWidth; x++) {
                // Get original pixel color channels
                int lInitialPixelColor = pImage.getRGB(x, y);
                // Finding nearest color in the palette
                Color lNearestColor = getNearestColor(lInitialPixelColor, pPalette);
                // Set quantized pixel
                lImageOut.setRGB(x, y, lNearestColor.getRGB());
                // Applying Floyd-Steinberg dithering
                int quantizationError = lInitialPixelColor - lNearestColor.getRGB();
                if ((x + 1) < lWidth) {
                    int lPixel = pImage.getRGB(x + 1, y);
                    lImageOut.setRGB(x + 1, y, lPixel + (quantizationError * (7 / 16)));
                }
                if ((x - 1) > 0 && (y + 1) < lHeight) {
                    int lPixel = pImage.getRGB(x - 1, y + 1);
                    lImageOut.setRGB(x - 1, y + 1, lPixel + (quantizationError * (3 / 16)));
                }
                if ((y + 1) < lHeight) {
                    int lPixel = pImage.getRGB(x, y + 1);
                    lImageOut.setRGB(x, y + 1, lPixel + (quantizationError * (5 / 16)));
                }
                if ((x + 1 < lWidth) && (y + 1 < lHeight)) {
                    int lPixel = pImage.getRGB(x + 1, y + 1);
                    lImageOut.setRGB(x + 1, y + 1, lPixel + (quantizationError * (1 / 16)));
                }
                // End of Floyd-Steinberg dithering
            }
        }
        return lImageOut;
    }

    /**
     * @param pPalette to load color model from
     * @return {@link java.awt.image.IndexColorModel} Color model initialized using pPalette colors
     */
    private static IndexColorModel paletteToColorModel(Color[] pPalette) {
        int lSize = pPalette.length;
        // Getting color component for each palette color
        byte[] lReds = new byte[lSize];
        byte[] lGreens = new byte[lSize];
        byte[] lBlues = new byte[lSize];
        for (int i = 0; i < lSize; i++) {
            Color lColor = pPalette[i];
            lReds[i] = (byte) lColor.getRed();
            lGreens[i] = (byte) lColor.getGreen();
            lBlues[i] = (byte) lColor.getBlue();
        }
        return new IndexColorModel(4, lSize, lReds, lGreens, lBlues);
    }

    /**
     * @param pColor Color to approximate
     * @param pPalette Color palette to use for quantization
     * @return {@link java.awt.Color} nearest from pColor value took in pPalette
     */
    private static Color getNearestColor(int pColor, Color[] pPalette) {
        Color lNearestColor = null;
        double lNearestDistance = Integer.MAX_VALUE;
        double lTempDist;
        for (Color lColor : pPalette) {
            Color lRgb = new Color(pColor);
            lTempDist = distance(lRgb.getRed(), lRgb.getGreen(), lRgb.getBlue(), lColor.getRed(), lColor.getGreen(), lColor.getBlue());
            if (lTempDist < lNearestDistance) {
                lNearestDistance = lTempDist;
                lNearestColor = lColor;
            }
        }
        return lNearestColor;
    }

    /**
     * @return Distance between 2 pixels color channels.
     */
    private static double distance(int pR1, int pG1, int pB1, int pR2, int pG2, int pB2) {
        double lDist = Math.pow(pR1 - pR2, 2) + Math.pow(pG1 - pG2, 2) + Math.pow(pB1 - pB2, 2);
        return Math.sqrt(lDist);
    }
}
Answer 0 (score: 5):
This site is intended for questions, not for debugging. But as an attempt to at least answer the question "What am I doing wrong?":

- An expression like (7 / 16) is evaluated with integer division, so the result will be 0. Use (7.0 / 16.0) instead.
- You cannot do this arithmetic on the packed int RGB values. For example, if you take the RGB value 0x000000FF (blue) and multiply it by 256, the result will be 0x0000FF00 (green). A computation like lPixel + (quantizationError * (3.0 / 16.0)) has to be done separately for the R, G and B channels.
- You are processing the image from bottom to top. Distributing the error among the lower-right pixels (as described on the Wikipedia page) then no longer makes sense. Change the loop from for (int y = (lHeight - 1); y >= 0; y--) to for (int y = 0; y < lHeight; y++).
- You cannot store the quantization error directly in the pixels of the BufferedImage, because the error may also be negative, and the image cannot handle that. (I also have my doubts about your color model, but that is only a gut feeling.)
- The image you described as the "expected result" contains colors that are clearly not in your palette.
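
Putting these points together, here is a rough sketch of how the dithering loop could look once the error is kept per channel in a separate floating-point buffer and the image is scanned top-to-bottom. It is only an illustration under these assumptions, not a drop-in fix: it reuses getNearestColor and paletteToColorModel from the question (and lives in the same class, with the same imports), while applyDitheredPaletteFixed, diffuse and clamp are new names introduced here.

    // Sketch only: per-channel Floyd-Steinberg with a separate float error buffer.
    // Assumes getNearestColor(...) and paletteToColorModel(...) from the question.
    public static BufferedImage applyDitheredPaletteFixed(BufferedImage pImage, Color[] pPalette) {
        int lWidth = pImage.getWidth();
        int lHeight = pImage.getHeight();
        IndexColorModel lColorModel = paletteToColorModel(pPalette);
        BufferedImage lImageOut = new BufferedImage(lWidth, lHeight, BufferedImage.TYPE_BYTE_INDEXED, lColorModel);

        // Working copies of the channels as floats, so the diffused error can be
        // fractional and negative (it cannot be stored back into a BufferedImage).
        float[][] lReds = new float[lHeight][lWidth];
        float[][] lGreens = new float[lHeight][lWidth];
        float[][] lBlues = new float[lHeight][lWidth];
        for (int y = 0; y < lHeight; y++) {
            for (int x = 0; x < lWidth; x++) {
                int lRgb = pImage.getRGB(x, y);
                lReds[y][x] = (lRgb >> 16) & 0xFF;
                lGreens[y][x] = (lRgb >> 8) & 0xFF;
                lBlues[y][x] = lRgb & 0xFF;
            }
        }

        // Scan top-to-bottom, left-to-right, so the error goes to not-yet-visited pixels.
        for (int y = 0; y < lHeight; y++) {
            for (int x = 0; x < lWidth; x++) {
                int lR = clamp(Math.round(lReds[y][x]));
                int lG = clamp(Math.round(lGreens[y][x]));
                int lB = clamp(Math.round(lBlues[y][x]));
                Color lNearest = getNearestColor(new Color(lR, lG, lB).getRGB(), pPalette);
                lImageOut.setRGB(x, y, lNearest.getRGB());

                // Quantization error, computed separately for each channel.
                float lErrR = lR - lNearest.getRed();
                float lErrG = lG - lNearest.getGreen();
                float lErrB = lB - lNearest.getBlue();

                // Floating-point weights: 7 / 16 would be integer division and yield 0.
                diffuse(lReds, lGreens, lBlues, x + 1, y, lErrR, lErrG, lErrB, 7.0f / 16.0f);
                diffuse(lReds, lGreens, lBlues, x - 1, y + 1, lErrR, lErrG, lErrB, 3.0f / 16.0f);
                diffuse(lReds, lGreens, lBlues, x, y + 1, lErrR, lErrG, lErrB, 5.0f / 16.0f);
                diffuse(lReds, lGreens, lBlues, x + 1, y + 1, lErrR, lErrG, lErrB, 1.0f / 16.0f);
            }
        }
        return lImageOut;
    }

    private static void diffuse(float[][] pReds, float[][] pGreens, float[][] pBlues,
                                int pX, int pY, float pErrR, float pErrG, float pErrB, float pWeight) {
        if (pY < 0 || pY >= pReds.length || pX < 0 || pX >= pReds[0].length) {
            return; // Neighbor lies outside the image, nothing to do.
        }
        pReds[pY][pX] += pErrR * pWeight;
        pGreens[pY][pX] += pErrG * pWeight;
        pBlues[pY][pX] += pErrB * pWeight;
    }

    private static int clamp(int pValue) {
        return Math.max(0, Math.min(255, pValue));
    }

Keeping the channels in float buffers is just one way to carry fractional and negative errors outside the image; the important part is that the error is never written back into the indexed BufferedImage itself.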