Low-power face detection with the Swift Vision API

Asked: 2019-04-26 09:49:46

Tags: ios swift face-detection vision-api

I have an iOS app with a screen saver. I want to track the user's face so that the screen saver disappears automatically when the user looks at the device.

I built a test app using the Vision API, starting from some code from Ray Wenderlich. I modified it to print face/no-face depending on whether someone is looking at the device. However, I noticed that CPU consumption is really high, and after a few minutes the device gets noticeably warm. Is there a way to lower the refresh rate of the face detection, or some other way to reduce power consumption?

Here is my modified code:

import AVFoundation
import UIKit
import Vision

class FaceDetectionViewController: UIViewController {
  var sequenceHandler = VNSequenceRequestHandler()


  let session = AVCaptureSession()
  var previewLayer: AVCaptureVideoPreviewLayer!

  let dataOutputQueue = DispatchQueue(
    label: "video data queue",
    qos: .userInitiated,
    attributes: [],
    autoreleaseFrequency: .workItem)

  override func viewDidLoad() {
    super.viewDidLoad()
    configureCaptureSession()

    // startRunning() blocks until the session starts, so keep it off the main thread
    dataOutputQueue.async { [weak self] in
      self?.session.startRunning()
    }
  }
}

// MARK: - Video Processing methods

extension FaceDetectionViewController {
  func configureCaptureSession() {
    // Define the capture device we want to use
    guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                               for: .video,
                                               position: .front) else {
      fatalError("No front video camera available")
    }

    // Connect the camera to the capture session input
    do {
      let cameraInput = try AVCaptureDeviceInput(device: camera)
      guard session.canAddInput(cameraInput) else {
        fatalError("Cannot add camera input to the session")
      }
      session.addInput(cameraInput)
    } catch {
      fatalError(error.localizedDescription)
    }

    // Create the video data output
    let videoOutput = AVCaptureVideoDataOutput()
    videoOutput.setSampleBufferDelegate(self, queue: dataOutputQueue)
    videoOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]

    // Add the video output to the capture session
    guard session.canAddOutput(videoOutput) else {
      fatalError("Cannot add video output to the session")
    }
    session.addOutput(videoOutput)

    let videoConnection = videoOutput.connection(with: .video)
    videoConnection?.videoOrientation = .portrait

  }
}
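One approach I'm considering is capping the camera's frame rate so that fewer frames are delivered (and analyzed) in the first place. This is an untested sketch; `limitFrameRate(of:to:)` is a helper name I made up, and it assumes the camera's active format actually supports a frame rate this low:

```swift
import AVFoundation

// Hypothetical helper: cap the capture frame rate so fewer frames are
// delivered to the sample buffer delegate per second. Would be called from
// configureCaptureSession() once the camera device has been obtained.
func limitFrameRate(of camera: AVCaptureDevice, to fps: Int32) {
  do {
    try camera.lockForConfiguration()
    // One frame every 1/fps seconds; e.g. fps = 2 delivers ~2 frames/sec.
    let frameDuration = CMTime(value: 1, timescale: fps)
    camera.activeVideoMinFrameDuration = frameDuration
    camera.activeVideoMaxFrameDuration = frameDuration
    camera.unlockForConfiguration()
  } catch {
    print("Could not lock camera for configuration: \(error)")
  }
}
```

Because this throttles the capture itself rather than just the analysis, it should also reduce the work the camera pipeline does, not only the Vision calls.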

extension FaceDetectionViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
  func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    // Get the pixel buffer backing this video frame
    guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
      return
    }

    let detectFaceRequest = VNDetectFaceLandmarksRequest(completionHandler: detectedFace)

    do {
      try sequenceHandler.perform(
        [detectFaceRequest],
        on: imageBuffer,
        orientation: .leftMirrored)
    } catch {
      print(error.localizedDescription)
    }
  }
}
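Alternatively, I could leave the capture configuration alone and simply drop most frames before running Vision. An untested sketch of what I mean; `lastAnalysis` would be a new stored property I'd add to the view controller, and the 5-second interval matches what I need:

```swift
// Assumed new stored property on FaceDetectionViewController:
//   var lastAnalysis = Date.distantPast
let analysisInterval: TimeInterval = 5.0

func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
  // Drop the frame unless at least analysisInterval seconds have passed
  let now = Date()
  guard now.timeIntervalSince(lastAnalysis) >= analysisInterval else { return }
  lastAnalysis = now

  guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
    return
  }

  let detectFaceRequest = VNDetectFaceLandmarksRequest(completionHandler: detectedFace)

  do {
    try sequenceHandler.perform([detectFaceRequest],
                                on: imageBuffer,
                                orientation: .leftMirrored)
  } catch {
    print(error.localizedDescription)
  }
}
```

The camera would still run at full frame rate with this approach, so I suspect it saves less power than lowering the frame rate itself, but it is a smaller change.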

extension FaceDetectionViewController {

  func detectedFace(request: VNRequest, error: Error?) {
    // Report whether at least one face was observed in this frame
    guard let results = request.results as? [VNFaceObservation],
          !results.isEmpty else {
      print("** NO FACE")
      return
    }
    print("FACE")
  }
}
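Since I only need to know whether a face is present at all, not where its landmarks are, I also wonder whether switching from `VNDetectFaceLandmarksRequest` to `VNDetectFaceRectanglesRequest` would be cheaper, as it only locates face bounding boxes. Untested sketch:

```swift
// VNDetectFaceRectanglesRequest only finds face bounding boxes, which
// should be less work than the full landmark detection used above.
let detectFaceRequest = VNDetectFaceRectanglesRequest { request, _ in
  let faceFound = (request.results as? [VNFaceObservation])?.isEmpty == false
  print(faceFound ? "FACE" : "** NO FACE")
}
```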

Is there a way to use the Vision API in a more power-efficient fashion? I don't need a fast refresh rate; one detection every 5 seconds is enough.

0 answers:

There are no answers yet.