I'm trying to build an Android app that lets you take a photo and then displays, in a text box, what the machine thinks is in the picture (done with an image recognition model). When the "Capture" button is pressed, the app sends a buffer to the model and the model outputs its result. However, I can't seem to send the buffer correctly at the moment (I think I'm sending a black screen, because the result never changes).
Here is where I save the image into a buffer and resize it appropriately (the model only accepts a specific input size):
private void takePicture() {
    if (cameraDevice == null)
        return;
    CameraManager manager = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
    try {
        CameraCharacteristics characteristics = manager.getCameraCharacteristics(cameraDevice.getId());
        Size[] jpegSizes = null;
        if (characteristics != null)
            jpegSizes = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)
                    .getOutputSizes(ImageFormat.JPEG);
        // Capture image with custom size
        int width = 640;
        int height = 480;
        if (jpegSizes != null && jpegSizes.length > 0) {
            width = jpegSizes[0].getWidth();
            height = jpegSizes[0].getHeight();
        }
        final ImageReader reader = ImageReader.newInstance(width, height, ImageFormat.JPEG, 1);
        List<Surface> outputSurface = new ArrayList<>(2);
        outputSurface.add(reader.getSurface());
        outputSurface.add(new Surface(textureView.getSurfaceTexture()));
        final CaptureRequest.Builder captureBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
        captureBuilder.addTarget(reader.getSurface());
        captureBuilder.set(CaptureRequest.CONTROL_MODE, CameraMetadata.CONTROL_MODE_AUTO);
        // Check orientation based on device
        int rotation = getWindowManager().getDefaultDisplay().getRotation();
        captureBuilder.set(CaptureRequest.JPEG_ORIENTATION, ORIENTATIONS.get(rotation));
        file = new File(Environment.getExternalStorageDirectory() + "/" + UUID.randomUUID().toString() + ".jpg");
        //globalFile = file;
        ImageReader.OnImageAvailableListener readerListener = new ImageReader.OnImageAvailableListener() {
            @Override
            public void onImageAvailable(ImageReader imageReader) {
                Image image = null;
                try {
                    image = reader.acquireLatestImage();
                    ByteBuffer buffer = image.getPlanes()[0].getBuffer();
                    // CONVERTING TO CORRECT SIZE AND WHERE I SAVE THE BUFFER
                    mImageBuffer = buffer;
                    mImageBuffer = ByteBuffer.allocateDirect(
                            DIM_BATCH_SIZE * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE);
                    mImageBuffer.order(ByteOrder.nativeOrder());
                    mImageBuffer.rewind();
                    // The issue might be here?
                    int pixel = 0;
                    for (int i = 0; i < DIM_IMG_SIZE_X; ++i) {
                        for (int j = 0; j < DIM_IMG_SIZE_Y; ++j) {
                            final int val = intValues[pixel++];
                            mImageBuffer.put((byte) ((val >> 16) & 0xFF));
                            mImageBuffer.put((byte) ((val >> 8) & 0xFF));
                            mImageBuffer.put((byte) (val & 0xFF));
                        }
                    }
                    // WHERE I CALL THE IMAGE RECOGNITION MODEL
                    runModelInference();
                    byte[] bytes = new byte[buffer.capacity()];
                    buffer.get(bytes);
                    save(bytes);
                } catch (FileNotFoundException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                } finally {
                    if (image != null)
                        image.close();
                }
            }

            private void save(byte[] bytes) throws IOException {
                OutputStream outputStream = null;
                try {
                    outputStream = new FileOutputStream(file);
                    outputStream.write(bytes);
                } finally {
                    if (outputStream != null)
                        outputStream.close();
                }
            }
        };
        reader.setOnImageAvailableListener(readerListener, mBackgroundHandler);
        final CameraCaptureSession.CaptureCallback captureListener = new CameraCaptureSession.CaptureCallback() {
            @Override
            public void onCaptureCompleted(@NonNull CameraCaptureSession session, @NonNull CaptureRequest request, @NonNull TotalCaptureResult result) {
                super.onCaptureCompleted(session, request, result);
                Toast.makeText(MainActivity.this, "Saved " + file, Toast.LENGTH_SHORT).show();
                createCameraPreview();
            }
        };
        cameraDevice.createCaptureSession(outputSurface, new CameraCaptureSession.StateCallback() {
            @Override
            public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
                try {
                    cameraCaptureSession.capture(captureBuilder.build(), captureListener, mBackgroundHandler);
                } catch (CameraAccessException e) {
                    e.printStackTrace();
                }
            }

            @Override
            public void onConfigureFailed(@NonNull CameraCaptureSession cameraCaptureSession) {
            }
        }, mBackgroundHandler);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
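For reference, intValues is just an int[] field that is supposed to hold the pixel values, and I'm starting to suspect I never actually fill it from the captured JPEG, which would explain the "black" input. Based on the TensorFlow Lite sample I was following, I think the missing step would look roughly like the sketch below (the helper name convertJpegToIntValues is made up, and I haven't verified that this is even the right approach):

// ASSUMPTION: a rough sketch of what I think the conversion should do, not my working code.
// Uses android.graphics.BitmapFactory and android.graphics.Bitmap to decode the JPEG bytes,
// scale the result to the model's input size, and copy the pixels into intValues
// before the put() loop above runs.
private void convertJpegToIntValues(byte[] jpegBytes) {
    Bitmap bitmap = BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length);
    Bitmap scaled = Bitmap.createScaledBitmap(bitmap, DIM_IMG_SIZE_X, DIM_IMG_SIZE_Y, true);
    // intValues would need to be sized DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y for this to fit
    scaled.getPixels(intValues, 0, scaled.getWidth(), 0, 0, scaled.getWidth(), scaled.getHeight());
}

If that's the right idea, I would call it with the JPEG bytes read from the image plane before the pixel loop, but I'm not sure whether that is correct either.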
Here is the runModelInference() function:
private void runModelInference() {
    if (mInterpreter == null) {
        Log.e(TAG, "Image classifier has not been initialized; Skipped.");
        return;
    }
    try {
        FirebaseModelInputs inputs = new FirebaseModelInputs.Builder().add(mImageBuffer).build();
        // Here's where the magic happens!!
        mInterpreter
                .run(inputs, mDataOptions)
                .addOnFailureListener(new OnFailureListener() {
                    @Override
                    public void onFailure(@NonNull Exception e) {
                        e.printStackTrace();
                        showToast("Error running model inference");
                    }
                })
                .continueWith(
                        new Continuation<FirebaseModelOutputs, List<String>>() {
                            @Override
                            public List<String> then(Task<FirebaseModelOutputs> task) {
                                byte[][] labelProbArray = task.getResult()
                                        .<byte[][]>getOutput(0);
                                List<String> topLabels = getTopLabels(labelProbArray);
                                TextView textLabels = findViewById(R.id.textLabels);
                                // WHERE I PRINT OUT LABELS
                                textLabels.setText(topLabels.get(0));
                                return topLabels;
                            }
                        });
    } catch (FirebaseMLException e) {
        e.printStackTrace();
        showToast("Error running model inference");
    }
}
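In case it matters, mDataOptions is set up following the ML Kit custom model docs, roughly like this (I'm reproducing it from memory, so treat it as approximate; mLabelList stands in for however my label list is loaded):

// ASSUMPTION: approximate setup of mDataOptions, per the ML Kit custom model docs.
// The model is expected to take a [DIM_BATCH_SIZE, DIM_IMG_SIZE_X, DIM_IMG_SIZE_Y, DIM_PIXEL_SIZE]
// byte tensor and return one byte score per label.
mDataOptions =
        new FirebaseModelInputOutputOptions.Builder()
                .setInputFormat(0, FirebaseModelDataType.BYTE,
                        new int[]{DIM_BATCH_SIZE, DIM_IMG_SIZE_X, DIM_IMG_SIZE_Y, DIM_PIXEL_SIZE})
                .setOutputFormat(0, FirebaseModelDataType.BYTE, new int[]{1, mLabelList.size()})
                .build();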
I know the buffer isn't working because I ran it through the debugger, but I can't figure out what the problem is. Please help! I'm very new to building apps and I've never done anything like this in Java before, so any help would be greatly appreciated!