I am trying to process camera preview image data using the Camera2 API, as described here: Camera preview image data processing with Android L and Camera2 API.
I successfully receive callbacks in onImageAvailableListener, but for further processing I need to get a Bitmap from the YUV_420_888 android.media.Image. I searched for similar questions, but none of them helped.
Could you suggest how to convert an android.media.Image (YUV_420_888) to a Bitmap, or is there a better way to listen for preview frames?
Answer 0 (score: 6)
For a simpler solution, see the implementation here:
Conversion YUV 420_888 to Bitmap (full code)
The function takes the media.Image as input and creates three RenderScript allocations from the Y, U and V planes. It follows the YUV_420_888 layout as shown in this Wikipedia illustration.
However, here we have three separate image planes for the Y, U and V channels, so I take them as three byte[] arrays, i.e. as U8 allocations. The Y allocation has size width * height bytes, while the U and V allocations each have size width * height / 4 bytes, reflecting the fact that each U byte covers 4 pixels (and likewise each V byte).
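The plane sizes described above can be sketched in plain Java. This is a hypothetical helper for illustration only, not part of the linked answer:

```java
public class YuvPlaneSizes {
    // Illustrates the YUV_420_888 plane sizes described above:
    // one Y byte per pixel, and one U and one V byte per 2x2 pixel block.
    static int[] planeSizes(int width, int height) {
        int ySize = width * height;       // full-resolution luma plane
        int uSize = width * height / 4;   // each U byte covers 4 pixels
        int vSize = width * height / 4;   // each V byte covers 4 pixels
        return new int[] { ySize, uSize, vSize };
    }

    public static void main(String[] args) {
        int[] sizes = planeSizes(640, 480);
        System.out.println(sizes[0] + " " + sizes[1] + " " + sizes[2]);
    }
}
```

For a 640x480 preview this gives a 307200-byte Y allocation and two 76800-byte chroma allocations, matching the sizes quoted above.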
Answer 1 (score: 3)
I wrote some code for this: it takes the YUV preview data and converts it to JPEG data, which I can then save as a Bitmap, a byte[], or anything else. (See the Allocation class.)
The SDK documentation states: "For efficient YUV processing with android.renderscript: Create a RenderScript Allocation with a supported YUV type, the IO_INPUT flag, and one of the sizes returned by getOutputSizes(Allocation.class), then obtain the Surface with getSurface()."
Here is the code; I hope it helps: https://github.com/pinguo-yuyidong/Camera2/blob/master/camera2/src/main/rs/yuv2rgb.rs
Answer 2 (score: 1)
You can use the built-in RenderScript intrinsic ScriptIntrinsicYuvToRGB to do this. The code below comes from Camera2 api Imageformat.yuv_420_888 results on rotated image:
```java
@Override
public void onImageAvailable(ImageReader reader) {
    // Get the YUV data
    final Image image = reader.acquireLatestImage();
    final ByteBuffer yuvBytes = this.imageToByteBuffer(image);

    // Convert YUV to RGB
    final RenderScript rs = RenderScript.create(this.mContext);

    final Bitmap bitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
    final Allocation allocationRgb = Allocation.createFromBitmap(rs, bitmap);

    final Allocation allocationYuv = Allocation.createSized(rs, Element.U8(rs), yuvBytes.array().length);
    allocationYuv.copyFrom(yuvBytes.array());

    ScriptIntrinsicYuvToRGB scriptYuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));
    scriptYuvToRgb.setInput(allocationYuv);
    scriptYuvToRgb.forEach(allocationRgb);

    allocationRgb.copyTo(bitmap);

    // Release
    bitmap.recycle();
    allocationYuv.destroy();
    allocationRgb.destroy();
    rs.destroy();
    image.close();
}

private ByteBuffer imageToByteBuffer(final Image image) {
    final Rect crop = image.getCropRect();
    final int width = crop.width();
    final int height = crop.height();

    final Image.Plane[] planes = image.getPlanes();
    final byte[] rowData = new byte[planes[0].getRowStride()];
    final int bufferSize = width * height * ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888) / 8;
    final ByteBuffer output = ByteBuffer.allocateDirect(bufferSize);

    int channelOffset = 0;
    int outputStride = 0;

    for (int planeIndex = 0; planeIndex < 3; planeIndex++) {
        if (planeIndex == 0) {
            channelOffset = 0;
            outputStride = 1;
        } else if (planeIndex == 1) {
            channelOffset = width * height + 1;
            outputStride = 2;
        } else if (planeIndex == 2) {
            channelOffset = width * height;
            outputStride = 2;
        }

        final ByteBuffer buffer = planes[planeIndex].getBuffer();
        final int rowStride = planes[planeIndex].getRowStride();
        final int pixelStride = planes[planeIndex].getPixelStride();

        final int shift = (planeIndex == 0) ? 0 : 1;
        final int widthShifted = width >> shift;
        final int heightShifted = height >> shift;

        buffer.position(rowStride * (crop.top >> shift) + pixelStride * (crop.left >> shift));

        for (int row = 0; row < heightShifted; row++) {
            final int length;

            if (pixelStride == 1 && outputStride == 1) {
                length = widthShifted;
                buffer.get(output.array(), channelOffset, length);
                channelOffset += length;
            } else {
                length = (widthShifted - 1) * pixelStride + 1;
                buffer.get(rowData, 0, length);

                for (int col = 0; col < widthShifted; col++) {
                    output.array()[channelOffset] = rowData[col * pixelStride];
                    channelOffset += outputStride;
                }
            }

            if (row < heightShifted - 1) {
                buffer.position(buffer.position() + rowStride - length);
            }
        }
    }

    return output;
}
```
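The channelOffset values in imageToByteBuffer (V at width * height, U at width * height + 1, both with output stride 2) pack the buffer in NV21 order: the full Y plane first, then V and U bytes interleaved. For a tightly packed image (pixel stride 1, no row padding) that packing reduces to the following plain-Java sketch; the class and method names are hypothetical, introduced here for illustration:

```java
public class Nv21Packer {
    // Illustrates the output layout produced by the imageToByteBuffer above:
    // Y plane first, then chroma bytes interleaved as V, U, V, U, ...
    static byte[] packNv21(byte[] y, byte[] u, byte[] v, int width, int height) {
        byte[] out = new byte[width * height * 3 / 2];
        System.arraycopy(y, 0, out, 0, width * height);  // full luma plane
        for (int i = 0; i < width * height / 4; i++) {
            out[width * height + 2 * i] = v[i];          // V comes first in NV21
            out[width * height + 2 * i + 1] = u[i];
        }
        return out;
    }
}
```

A 2x2 image with Y = {1, 2, 3, 4}, U = {5}, V = {6} packs to {1, 2, 3, 4, 6, 5}: note the V byte lands before the U byte.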