I'm trying to integrate CameraX with ML Kit using a custom model. The problem I'm running into is taking the ImageProxy delivered by the ImageAnalysis callback and using that data with ML Kit. Previously, with Camera2, I would manipulate a Bitmap drawn on a SurfaceView. I can get a Bitmap by creating a FirebaseVisionImage, but I want to avoid manipulating a Bitmap on every frame I receive, because that is very expensive.
This shows up in logcat: ML Kit has detected that you seem to pass camera frames to the detector as a Bitmap object. This is inefficient. Please use YUV_420_888 format for camera2 API or NV21 format for (legacy) camera API and directly pass down the byte array to ML Kit
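From what I understand, the way to avoid that warning is to hand ML Kit the analyzer frame's underlying android.media.Image (which CameraX delivers in YUV_420_888) instead of a Bitmap. This is a rough sketch of the analyzer I have in mind; MlKitAnalyzer and degreesToFirebaseRotation are just placeholder names, and depending on the CameraX version the ImageProxy.image getter may require opting in to ExperimentalGetImage:

import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.common.FirebaseVisionImageMetadata

// Placeholder analyzer: wraps the camera frame for ML Kit without creating a Bitmap.
class MlKitAnalyzer : ImageAnalysis.Analyzer {

    override fun analyze(imageProxy: ImageProxy) {
        // imageProxy.image may need an ExperimentalGetImage opt-in on some CameraX versions.
        val mediaImage = imageProxy.image
        if (mediaImage == null) {
            imageProxy.close()
            return
        }
        val rotation = degreesToFirebaseRotation(imageProxy.imageInfo.rotationDegrees)
        // Wraps the YUV_420_888 frame directly, no Bitmap conversion.
        val visionImage = FirebaseVisionImage.fromMediaImage(mediaImage, rotation)
        // TODO: run the custom-model detector on visionImage, then release the frame.
        imageProxy.close()
    }

    // Maps the rotation reported by CameraX (degrees) to the constant ML Kit expects.
    private fun degreesToFirebaseRotation(degrees: Int): Int = when (degrees) {
        0 -> FirebaseVisionImageMetadata.ROTATION_0
        90 -> FirebaseVisionImageMetadata.ROTATION_90
        180 -> FirebaseVisionImageMetadata.ROTATION_180
        270 -> FirebaseVisionImageMetadata.ROTATION_270
        else -> throw IllegalArgumentException("Unsupported rotation: $degrees")
    }
}

I would then register an instance of this with imageAnalysis.setAnalyzer(executor, MlKitAnalyzer()) in the setup below.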
I need to take the image data from ImageAnalysis and crop out the middle of what the camera preview captures (we are using a full-screen camera preview), because our custom model expects a square image. This is what I have so far:
val preview = Preview.Builder().apply {
    setTargetResolution(screenSize)
    setTargetRotation(viewFinder.display.rotation)
}.build()
preview.previewSurfaceProvider = viewFinder.previewSurfaceProvider

val imageAnalysis = ImageAnalysis.Builder()
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .setTargetAspectRatio(AspectRatio.RATIO_16_9)
    .build()
imageAnalysis.setAnalyzer(executor,
    ImageAnalysis.Analyzer { imageProxy ->
        // do image manipulation here
    })

val cameraSelector = CameraSelector.Builder()
    .requireLensFacing(CameraSelector.LENS_FACING_BACK)
    .build()

cameraProviderFuture.addListener(Runnable {
    val cameraProvider = cameraProviderFuture.get()
    cameraProvider.bindToLifecycle(
        requireActivity(),
        cameraSelector,
        preview,
        imageAnalysis
    )
}, executor)
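For the square crop itself, the approach I'm considering (though I'm not sure it's correct) is to copy the frame into an NV21 byte array, crop the centre square directly in that buffer, and pass the result to ML Kit with FirebaseVisionImage.fromByteArray, so no Bitmap is ever created. Rough sketch; toNv21, cropNv21CenterSquare and buildSquareVisionImage are placeholder helpers, and the plane copy is the commonly used shortcut that assumes the chroma planes are already interleaved and carry no row/pixel-stride padding:

import androidx.camera.core.ImageProxy
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.common.FirebaseVisionImageMetadata

// Placeholder helper: copy the Y, V and U planes into a single NV21 byte array.
// Shortcut only: assumes interleaved chroma (pixel stride 2) and no row padding;
// on devices that pad the planes the strides have to be honoured instead.
private fun ImageProxy.toNv21(): ByteArray {
    val yBuffer = planes[0].buffer
    val uBuffer = planes[1].buffer
    val vBuffer = planes[2].buffer
    val ySize = yBuffer.remaining()
    val uSize = uBuffer.remaining()
    val vSize = vBuffer.remaining()
    val nv21 = ByteArray(ySize + uSize + vSize)
    yBuffer.get(nv21, 0, ySize)
    vBuffer.get(nv21, ySize, vSize)           // NV21 interleaves V first, then U
    uBuffer.get(nv21, ySize + vSize, uSize)
    return nv21
}

// Placeholder helper: cut the centred square out of an NV21 buffer.
private fun cropNv21CenterSquare(nv21: ByteArray, width: Int, height: Int): ByteArray {
    // Keep the size and offsets even so the half-resolution chroma plane stays aligned.
    val size = minOf(width, height) / 2 * 2
    val left = (width - size) / 2 / 2 * 2
    val top = (height - size) / 2 / 2 * 2
    val out = ByteArray(size * size * 3 / 2)

    // Luma (Y) plane: one byte per pixel.
    for (row in 0 until size) {
        System.arraycopy(nv21, (top + row) * width + left, out, row * size, size)
    }
    // Chroma (VU) plane: interleaved at half vertical resolution, `width` bytes per row.
    val srcChroma = width * height
    val dstChroma = size * size
    for (row in 0 until size / 2) {
        System.arraycopy(
            nv21, srcChroma + (top / 2 + row) * width + left,
            out, dstChroma + row * size,
            size
        )
    }
    return out
}

// Placeholder glue: build a square FirebaseVisionImage from an analyzer frame.
private fun buildSquareVisionImage(imageProxy: ImageProxy, rotation: Int): FirebaseVisionImage {
    val size = minOf(imageProxy.width, imageProxy.height) / 2 * 2   // same rounding as the crop
    val square = cropNv21CenterSquare(imageProxy.toNv21(), imageProxy.width, imageProxy.height)
    val metadata = FirebaseVisionImageMetadata.Builder()
        .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
        .setWidth(size)
        .setHeight(size)
        .setRotation(rotation)
        .build()
    return FirebaseVisionImage.fromByteArray(square, metadata)
}

Is this the right way to go about it, or is there a cheaper way to get a square centre crop of the analysis frame into ML Kit without touching a Bitmap?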