I am trying to deploy the Android TensorFlow-Lite examples, specifically the Detector Activity.
I deployed it successfully on a tablet. The app runs well: it detects objects, draws a bounding rectangle around them, and shows a label with a confidence score.
Then I set up a Raspberry Pi 3 Model B board, installed Android Things on it, connected over ADB, and deployed the same program from Android Studio. However, the screen attached to the Raspberry Pi board stays blank.
While reviewing the Camera Demo for Android Things tutorial, I got the idea of enabling hardware acceleration to support the camera preview. I added:
android:hardwareAccelerated="true"
to the application tag of the manifest.
I also added the following inside the application tag:
<uses-library android:name="com.google.android.things" />
and added an intent filter to the activity declaration:
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.IOT_LAUNCHER" />
<category android:name="android.intent.category.DEFAULT" />
</intent-filter>
so that the TensorFlow app runs at boot.
I deployed the app again, but the same error persists: I cannot configure the preview session.
Here is the relevant code from the TensorFlow example:
private void createCameraPreviewSession() {
  try {
    final SurfaceTexture texture = textureView.getSurfaceTexture();
    assert texture != null;

    // We configure the size of default buffer to be the size of camera preview we want.
    texture.setDefaultBufferSize(previewSize.getWidth(), previewSize.getHeight());

    // This is the output Surface we need to start preview.
    final Surface surface = new Surface(texture);

    // We set up a CaptureRequest.Builder with the output Surface.
    previewRequestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    previewRequestBuilder.addTarget(surface);

    LOGGER.e("Opening camera preview: " + previewSize.getWidth() + "x" + previewSize.getHeight());

    // Create the reader for the preview frames.
    previewReader =
        ImageReader.newInstance(
            previewSize.getWidth(), previewSize.getHeight(), ImageFormat.YUV_420_888, 2);
    previewReader.setOnImageAvailableListener(imageListener, backgroundHandler);
    previewRequestBuilder.addTarget(previewReader.getSurface());

    // Here, we create a CameraCaptureSession for camera preview.
    cameraDevice.createCaptureSession(
        Arrays.asList(surface, previewReader.getSurface()),
        new CameraCaptureSession.StateCallback() {
          @Override
          public void onConfigured(final CameraCaptureSession cameraCaptureSession) {
            // The camera is already closed
            if (null == cameraDevice) {
              return;
            }

            // When the session is ready, we start displaying the preview.
            captureSession = cameraCaptureSession;
            try {
              // Auto focus should be continuous for camera preview.
              previewRequestBuilder.set(
                  CaptureRequest.CONTROL_AF_MODE,
                  CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
              // Flash is automatically enabled when necessary.
              previewRequestBuilder.set(
                  CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);

              // Finally, we start displaying the camera preview.
              previewRequest = previewRequestBuilder.build();
              captureSession.setRepeatingRequest(
                  previewRequest, captureCallback, backgroundHandler);
            } catch (final CameraAccessException e) {
              LOGGER.e(e, "Exception!");
              LOGGER.e("camera access exception!");
            }
          }

          @Override
          public void onConfigureFailed(final CameraCaptureSession cameraCaptureSession) {
            showToast("Failed");
            LOGGER.e("configure failed!!");
          }
        },
        null);
  } catch (final CameraAccessException e) {
    LOGGER.e("camera access exception!");
    LOGGER.e(e, "Exception!");
  }
}
The error log comes from the onConfigureFailed override; the relevant log output for that statement is:
11-12 14:02:40.677 1991-2035/org.tensorflow.demo E/CameraCaptureSession: Session 0: Failed to create capture session; configuration failed
11-12 14:02:40.679 1991-2035/org.tensorflow.demo E/tensorflow: CameraConnectionFragment: configure failed!!
However, I am unable to trace the Session 0 stack trace any further.
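One way to narrow this down is to check which output configurations the Raspberry Pi camera actually supports before building the session. This is a minimal sketch using the standard Camera2 `StreamConfigurationMap` query (it assumes it runs inside an Activity and reuses the demo's `LOGGER`; it is not code from the TensorFlow sample):

```java
import android.content.Context;
import android.graphics.ImageFormat;
import android.graphics.SurfaceTexture;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.CameraManager;
import android.hardware.camera2.params.StreamConfigurationMap;
import android.util.Size;

// Log every preview and YUV output size the camera HAL reports,
// so the sizes requested by the demo can be checked against them.
private void dumpSupportedSizes() throws Exception {
  CameraManager manager = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
  for (String id : manager.getCameraIdList()) {
    CameraCharacteristics chars = manager.getCameraCharacteristics(id);
    StreamConfigurationMap map =
        chars.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
    if (map == null) continue;
    for (Size size : map.getOutputSizes(SurfaceTexture.class)) {
      LOGGER.i("Camera " + id + " supports SurfaceTexture size " + size);
    }
    for (Size size : map.getOutputSizes(ImageFormat.YUV_420_888)) {
      LOGGER.i("Camera " + id + " supports YUV_420_888 size " + size);
    }
  }
}
```

If the sizes or the two-surface combination the demo requests are absent here, `onConfigureFailed` is the expected outcome.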
Apart from turning on hardware acceleration and adding those few tags to the manifest, I have not tried anything else.
I have done my research and looked at other examples, but they only take a photo on a button click. I need a working live camera preview.
I also have the CameraDemoForAndroidThings example, but I'm afraid I don't know enough Kotlin to work out how it operates.
If anyone has managed to get the Java version of the TensorFlow Detection Activity running on a Raspberry Pi with Android Things, please chime in and let us know how you did it.
Update:
Apparently the camera only supports one stream configuration at a time. From this I inferred that I had to modify the createCaptureSession() call to use only one surface, and my code now looks like this:
cameraDevice.createCaptureSession(
    // Arrays.asList(surface, previewReader.getSurface()),
    Arrays.asList(surface),
    new CameraCaptureSession.StateCallback() {
      @Override
      public void onConfigured(final CameraCaptureSession cameraCaptureSession) {
        // The camera is already closed
        if (null == cameraDevice) {
          return;
        }

        // When the session is ready, we start displaying the preview.
        captureSession = cameraCaptureSession;
        try {
          // Auto focus should be continuous for camera preview.
          // previewRequestBuilder.set(
          //     CaptureRequest.CONTROL_AF_MODE,
          //     CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
          // Flash is automatically enabled when necessary.
          // previewRequestBuilder.set(
          //     CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);

          // Finally, we start displaying the camera preview.
          previewRequest = previewRequestBuilder.build();
          captureSession.setRepeatingRequest(
              previewRequest, captureCallback, backgroundHandler);
          previewRequestBuilder.addTarget(previewReader.getSurface());
        } catch (final CameraAccessException e) {
          LOGGER.e("exception hit while configuring camera!");
          LOGGER.e(e, "Exception!");
        }
      }

      @Override
      public void onConfigureFailed(final CameraCaptureSession cameraCaptureSession) {
        LOGGER.e("Configure failed!");
        showToast("Failed");
      }
    },
    null);
This gives me a live preview. However, the code no longer passes images from the preview to the processImage() block.
Has anyone successfully gotten the live camera preview in the TensorFlow-Lite examples working on Android Things?
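Two details in the updated code may explain this. First, `addTarget(previewReader.getSurface())` is called after `previewRequestBuilder.build()`, so the built request that is actually repeated never includes the reader surface. Second, the session itself only contains `surface`, so frames are never delivered to `previewReader` and its `OnImageAvailableListener` (which feeds `processImage()`) never fires. A hedged sketch of one possible fix, assuming the single supported stream must be the `ImageReader` (all field names taken from the demo; this is untested speculation, not the sample's official fix):

```java
// Target ONLY the ImageReader surface: the same stream that drives
// the OnImageAvailableListener, and therefore processImage().
previewRequestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
previewRequestBuilder.addTarget(previewReader.getSurface()); // add target BEFORE build()

cameraDevice.createCaptureSession(
    Collections.singletonList(previewReader.getSurface()),
    new CameraCaptureSession.StateCallback() {
      @Override
      public void onConfigured(final CameraCaptureSession cameraCaptureSession) {
        if (null == cameraDevice) {
          return;
        }
        captureSession = cameraCaptureSession;
        try {
          // Build the request only after all targets have been added.
          previewRequest = previewRequestBuilder.build();
          captureSession.setRepeatingRequest(
              previewRequest, captureCallback, backgroundHandler);
        } catch (final CameraAccessException e) {
          LOGGER.e(e, "Exception!");
        }
      }

      @Override
      public void onConfigureFailed(final CameraCaptureSession cameraCaptureSession) {
        LOGGER.e("Configure failed!");
      }
    },
    null);
```

With this layout the on-screen `TextureView` preview is lost (since its surface is no longer a target), so the visible preview would have to be drawn from the YUV frames received in the listener instead.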