I am implementing the google-vision face tracker.
I have looked at some Stack Overflow implementations (here) for creating a bitmap of the face, and this is my result:
Face detection and bitmap
class MyFaceDetector extends Detector<Face> {
    private Detector<Face> mDelegate;

    MyFaceDetector(Detector<Face> delegate) {
        mDelegate = delegate;
    }

    public SparseArray<Face> detect(Frame frame) {
        // Create a YUV image from the frame's byte[]
        YuvImage yuvImage = new YuvImage(frame.getGrayscaleImageData().array(),
                ImageFormat.NV21,
                frame.getMetadata().getWidth(),
                frame.getMetadata().getHeight(),
                null);
        // Compress the YUV image to JPEG
        ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
        yuvImage.compressToJpeg(new Rect(0, 0, frame.getMetadata().getWidth(),
                frame.getMetadata().getHeight()), 100, byteArrayOutputStream);
        byte[] jpegArray = byteArrayOutputStream.toByteArray();
        // Decode the JPEG into a Bitmap and rebuild a Frame from it
        Bitmap bmp = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);
        Frame outputbmp = new Frame.Builder().setBitmap(bmp)
                .setRotation(Frame.ROTATION_270).build();

        // part of image processing
        // ...
        // part of image processing
        // NOTE: the "else" further down matches an "if" opened in the
        // elided section above.

            double heartRateFrequency = Fft.FFT(arrayGreen, heartRateFrameLength, finalSamplingFrequency);
            double battitialminuto = (int) ceil(heartRateFrequency * 60);
            double heartRate1Frequency = Fft.FFT(arrayRed, heartRateFrameLength, finalSamplingFrequency);
            double breath1 = (int) ceil(heartRate1Frequency * 60);
            if (battitialminuto > 10 || battitialminuto < 24) {
                if (breath1 > 10 || breath1 < 24) {
                    bufferAvgBr = (battitialminuto + breath1) / 2;
                } else {
                    bufferAvgBr = battitialminuto;
                }
            } else if (breath1 > 10 || breath1 < 24) {
                bufferAvgBr = breath1;
            }
            Breath = (int) bufferAvgBr;
        } else {
            // do nothing
        }
        return mDelegate.detect(outputbmp);
    }

    public boolean isOperational() {
        return mDelegate.isOperational();
    }

    public boolean setFocus(int id) {
        return mDelegate.setFocus(id);
    }
}
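For context, the elided image-processing part is supposed to buffer one colour sample per frame into arrayGreen/arrayRed before the FFT runs. A minimal, hypothetical sketch of that kind of step (bufferColourSample and frameCounter are illustrative names, not my actual code):

// Hypothetical sketch: average the green and red channels of the face
// bitmap and buffer one sample per frame for the FFT.
private void bufferColourSample(Bitmap faceBmp) {
    int w = faceBmp.getWidth(), h = faceBmp.getHeight();
    int[] pixels = new int[w * h];
    faceBmp.getPixels(pixels, 0, w, 0, 0, w, h);
    long greenSum = 0, redSum = 0;
    for (int p : pixels) {
        greenSum += Color.green(p); // android.graphics.Color
        redSum += Color.red(p);
    }
    // arrayGreen/arrayRed are the buffers the FFT later reads;
    // frameCounter is an illustrative index, not a real field of mine.
    arrayGreen[frameCounter] = greenSum / (double) (w * h);
    arrayRed[frameCounter] = redSum / (double) (w * h);
    frameCounter++;
}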
And here is the FaceTrackerActivity class:
private void createCameraSource() {
    Context context = getApplicationContext();
    detector = new FaceDetector.Builder(context)
            .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
            .setMode(FaceDetector.ACCURATE_MODE)
            .setLandmarkType(FaceDetector.ALL_LANDMARKS)
            .build();
    MyFaceDetector myFaceDetector = new MyFaceDetector(detector);
    detector.setProcessor(
            new MultiProcessor.Builder<>(new GraphicFaceTrackerFactory())
                    .build());
    if (!detector.isOperational()) {
        Log.w(TAG, "Face detector dependencies are not yet available.");
    }
    mCameraSource = new CameraSource.Builder(context, detector)
            .setRequestedPreviewSize(640, 480)
            .setFacing(CameraSource.CAMERA_FACING_FRONT)
            .setRequestedFps(30.0f)
            .build();
}
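For reference, the wrapped-detector pattern from the answer I linked, as I understand it, attaches the processor to the wrapper and passes the wrapper (not the raw detector) to the camera source. A minimal sketch of that wiring, using the same names as my code:

// Sketch of the wrapped-detector wiring from the linked pattern:
// the wrapper receives the processor and is handed to the CameraSource.
MyFaceDetector myFaceDetector = new MyFaceDetector(detector);
myFaceDetector.setProcessor(
        new MultiProcessor.Builder<>(new GraphicFaceTrackerFactory())
                .build());
mCameraSource = new CameraSource.Builder(context, myFaceDetector)
        .setRequestedPreviewSize(640, 480)
        .setFacing(CameraSource.CAMERA_FACING_FRONT)
        .setRequestedFps(30.0f)
        .build();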
This is where I want to display the breath value produced by the MyFaceDetector class:
@SuppressLint("ClickableViewAccessibility")
@Override
public void onCreate(Bundle icicle) {
    super.onCreate(icicle);
    setContentView(R.layout.main);

    mPreview = (CameraSourcePreview) findViewById(R.id.preview);
    mGraphicOverlay = (GraphicOverlay) findViewById(R.id.faceOverlay);
    info3 = (TextView) findViewById(R.id.info3);
    displayInfo = (Button) findViewById(R.id.display);

    // Show the latest breath estimate when the button is tapped
    displayInfo.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            info3.setText("result" + Breath);
        }
    });
}
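One idea I'm considering, instead of only reading the value on the button press, is to refresh the TextView periodically. A hypothetical sketch (uiHandler and refreshBreath are illustrative names; it assumes Breath is a static field that MyFaceDetector keeps updated):

// Hypothetical: poll the shared Breath value on the UI thread once per
// second instead of waiting for the button press.
// Requires android.os.Handler and android.os.Looper.
private final Handler uiHandler = new Handler(Looper.getMainLooper());
private final Runnable refreshBreath = new Runnable() {
    @Override
    public void run() {
        info3.setText("result" + Breath);
        uiHandler.postDelayed(this, 1000); // refresh every second
    }
};
// e.g. start it from onResume(): uiHandler.post(refreshBreath);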
I am a beginner and I don't understand why no result appears. I'm not sure whether I'm making a mistake in computing the bitmap image or in reading out the "breath" result, but it compiles without errors, so I can't locate the problem. Can someone point me in the right direction?