How do I create a Bitmap from a grayscale byte buffer image?

Asked: 2015-09-05 10:49:57

Tags: android bitmap grayscale google-vision android-vision

I'm trying to process frame images with the new Android face-detection Mobile Vision API.

So I created a custom detector to get hold of the Frame and tried calling getBitmap(), but it returns null, so I accessed the frame's grayscale data instead. Is there a way to create a Bitmap, or a similar image-holder class, from it?

import android.util.SparseArray;

import com.google.android.gms.vision.Detector;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;

import java.nio.ByteBuffer;

public class CustomFaceDetector extends Detector<Face> {
    private Detector<Face> mDelegate;

    public CustomFaceDetector(Detector<Face> delegate) {
        mDelegate = delegate;
    }

    @Override
    public SparseArray<Face> detect(Frame frame) {
        // frame.getBitmap() returns null here, so fall back to the grayscale data
        ByteBuffer byteBuffer = frame.getGrayscaleImageData();
        byte[] bytes = byteBuffer.array();
        int w = frame.getMetadata().getWidth();
        int h = frame.getMetadata().getHeight();
        // Byte array to Bitmap here
        return mDelegate.detect(frame);
    }

    @Override
    public boolean isOperational() {
        return mDelegate.isOperational();
    }

    @Override
    public boolean setFocus(int id) {
        return mDelegate.setFocus(id);
    }
}
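
For context, here is a rough sketch of how a wrapper detector like this is typically wired into the Mobile Vision pipeline. The context and faceTrackerFactory variables are placeholders for your own Activity context and MultiProcessor.Factory; they are not part of the question:

FaceDetector faceDetector = new FaceDetector.Builder(context)
        .setTrackingEnabled(true)
        .build();
CustomFaceDetector customDetector = new CustomFaceDetector(faceDetector);

// The wrapper, not the stock detector, receives the processor and the camera frames
customDetector.setProcessor(new MultiProcessor.Builder<>(faceTrackerFactory).build());

CameraSource cameraSource = new CameraSource.Builder(context, customDetector)
        .setFacing(CameraSource.CAMERA_FACING_FRONT)
        .setRequestedFps(30.0f)
        .build();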

2 Answers:

Answer 0 (score: 12)

You may well have sorted this out by now, but in case someone stumbles across this question in the future, here is how I solved it:

As @pm0733464 points out, the default image format coming from android.hardware.Camera is NV21, which is what CameraSource uses.

This Stack Overflow answer provides the conversion:

YuvImage yuvimage = new YuvImage(byteBuffer.array(), ImageFormat.NV21, w, h, null); // YuvImage expects a byte[], not a ByteBuffer
ByteArrayOutputStream baos = new ByteArrayOutputStream();
yuvimage.compressToJpeg(new Rect(0, 0, w, h), 100, baos); // where 100 is the quality of the generated JPEG
byte[] jpegArray = baos.toByteArray();
Bitmap bitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);

Although frame.getGrayscaleImageData() suggests that bitmap will be a grayscale version of the original image, in my experience that is not the case. In fact, the bitmap is identical to the one supplied natively to the SurfaceHolder.
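
If you genuinely need a grayscale Bitmap, one rough sketch (not from the original answer) is to expand just the Y plane of the NV21 buffer into ARGB pixels. This assumes it runs inside detect(), where w and h are already defined, and that the row stride equals the frame width:

byte[] yuv = frame.getGrayscaleImageData().array();
int[] pixels = new int[w * h];
for (int i = 0; i < w * h; i++) {
    int luma = yuv[i] & 0xFF;                // Y value, 0..255
    pixels[i] = Color.rgb(luma, luma, luma); // replicate luma into R, G and B
}
Bitmap grayBitmap = Bitmap.createBitmap(pixels, w, h, Bitmap.Config.ARGB_8888);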

Answer 1 (score: 0)

Just adding a bit extra here to set up a 300px-per-side box as the detection area. Incidentally, if you don't pass the frame width and height from the metadata along with the getGrayscaleImageData() output, you get strange, corrupted bitmaps.

@Override
public SparseArray<Barcode> detect(Frame frame) {
    // *** crop the frame here: a 300 x 300 px box centred in the frame
    int boxx = 300;
    int width = frame.getMetadata().getWidth();
    int height = frame.getMetadata().getHeight();
    int ay = (width / 2) + (boxx / 2);   // right edge of the box
    int by = (width / 2) - (boxx / 2);   // left edge of the box
    int ax = (height / 2) + (boxx / 2);  // bottom edge of the box
    int bx = (height / 2) - (boxx / 2);  // top edge of the box

    // Convert the NV21 data to a JPEG, cropping to the box above
    YuvImage yuvimage = new YuvImage(frame.getGrayscaleImageData().array(), ImageFormat.NV21,
            frame.getMetadata().getWidth(), frame.getMetadata().getHeight(), null);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    yuvimage.compressToJpeg(new Rect(by, bx, ay, ax), 100, baos); // where 100 is the quality of the generated JPEG
    byte[] jpegArray = baos.toByteArray();
    Bitmap bitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);

    // Hand the cropped bitmap to the delegate detector
    Frame outputFrame = new Frame.Builder().setBitmap(bitmap).build();
    return mDelegate.detect(outputFrame);
}

@Override
public boolean isOperational() {
    return mDelegate.isOperational();
}

@Override
public boolean setFocus(int id) {
    return mDelegate.setFocus(id);
}
}
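
A side note on that crop rectangle: Rect takes (left, top, right, bottom), so the same centred box can be written a little more readably, for example:

int boxSize = 300;
int left   = width / 2 - boxSize / 2;
int top    = height / 2 - boxSize / 2;
int right  = width / 2 + boxSize / 2;
int bottom = height / 2 + boxSize / 2;
yuvimage.compressToJpeg(new Rect(left, top, right, bottom), 100, baos); // same box as Rect(by, bx, ay, ax)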