I'm trying to use the Google Mobile Vision API to detect when a user smiles in the camera feed. The problem is that Google Mobile Vision doesn't detect any faces, while Apple's own face detection (a CIDetector, shown below) immediately recognizes and tracks any face I test the app with. I run both detectors from func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) { }.
Apple's detector works well, but Google's finds no faces at all. How do I fix my code so that Google's API works too? What am I doing wrong?
My code:
var options = [GMVDetectorFaceTrackingEnabled: true,
               GMVDetectorFaceLandmarkType: GMVDetectorFaceLandmark.all.rawValue,
               GMVDetectorFaceMinSize: 0.15] as [String: Any]

var GfaceDetector = GMVDetector(ofType: GMVDetectorTypeFace, options: options)

extension ViewController: AVCaptureVideoDataOutputSampleBufferDelegate {

    func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {

        // Build a CIImage (and a UIImage for Google) from the camera frame.
        let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        let attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate)
        let ciImage1 = CIImage(cvImageBuffer: pixelBuffer!, options: attachments as? [String: Any])
        let Gimage = UIImage(ciImage: ciImage1)

        // Google Mobile Vision detection.
        let Gfaces = GfaceDetector?.features(in: Gimage, options: nil) as? [GMVFaceFeature]

        // Apple CIDetector detection.
        let options: [String: Any] = [CIDetectorImageOrientation: exifOrientation(orientation: UIDevice.current.orientation),
                                      CIDetectorSmile: true,
                                      CIDetectorEyeBlink: true]
        let allFeatures = faceDetector?.features(in: ciImage1, options: options)

        let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer)
        let cleanAperture = CMVideoFormatDescriptionGetCleanAperture(formatDescription!, false)

        var smilingProb = CGFloat()

        guard let features = allFeatures else { return }

        print("GFace \(Gfaces?.count)")
        // THE PRINT ABOVE RETURNS 0

        // MARK: ------ Google System Setup
        for face: GMVFaceFeature in Gfaces! {
            print("Google1")
            if face.hasSmilingProbability {
                print("Google \(face.smilingProbability)")
                smilingProb = face.smilingProbability
            }
        }

        // Apple results: draw the face rect and feature details.
        for feature in features {
            if let faceFeature = feature as? CIFaceFeature {
                let faceRect = calculateFaceRect(facePosition: faceFeature.mouthPosition,
                                                 faceBounds: faceFeature.bounds,
                                                 clearAperture: cleanAperture)
                let featureDetails = ["has smile: \(faceFeature.hasSmile), \(smilingProb)",
                                      "has closed left eye: \(faceFeature.leftEyeClosed)",
                                      "has closed right eye: \(faceFeature.rightEyeClosed)"]
                update(with: faceRect, text: featureDetails.joined(separator: "\n"))
            }
        }

        // Hide the details overlay when no faces are found.
        if features.count == 0 {
            DispatchQueue.main.async {
                self.detailsView.alpha = 0.0
            }
        }
    }
}
Update
I copied the Google Mobile Vision detection code into another app, and there it works. The difference is that that app analyzes a single image rather than continuously receiving frames. Could this be related to how often I'm sending requests, or to the format/quality of the CIImage?
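One thing I'm wondering about (unverified, adapted from Google's sample code, so treat it as a sketch): a UIImage created with UIImage(ciImage:) has no underlying CGImage, and GoogleMobileVision ships a GMVUtility helper that builds an RGBA-backed UIImage straight from the sample buffer. Something like the following, where detectGoogleFaces is just a hypothetical helper called from captureOutput, would also let me throttle how often detection runs:

import AVFoundation
import GoogleMobileVision
import UIKit

var frameCount = 0

// Hypothetical helper, called from captureOutput for each camera frame.
func detectGoogleFaces(in sampleBuffer: CMSampleBuffer) {
    // Throttle: only run detection on every 5th frame.
    frameCount += 1
    guard frameCount % 5 == 0 else { return }

    // Unlike UIImage(ciImage:), this produces a UIImage with real pixel
    // backing that GMVDetector can read. (From Google's sample code.)
    guard let image = GMVUtility.sampleBufferTo32RGBA(sampleBuffer) else { return }

    let Gfaces = GfaceDetector?.features(in: image, options: nil) as? [GMVFaceFeature] ?? []
    print("GFace \(Gfaces.count)")
}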
Another update
I've found a problem with how my app behaves: the image the API receives is not upright, i.e. it doesn't match the phone's orientation. For example, if I hold the phone in front of me in normal portrait mode, the image is rotated 90 degrees counterclockwise. I have no idea why, since the live camera preview looks fine. The Google docs say:
"The face detector expects images and the faces in them to be in an upright orientation. If you need to rotate the image, pass in orientation information in the dictionary options with the GMVDetectorImageOrientation key. The detector will rotate the images for you based on the orientation value."
New questions (I believe an answer to either of them would solve my problem):
A: How do I use the GMVDetectorImageOrientation key to set the orientation?
B: How do I rotate the UIImage itself (not the UIImageView) 90 degrees clockwise? (A sketch of my attempt follows these questions.)
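For B, the closest I've gotten is redrawing the image into a context with swapped dimensions. A minimal sketch using only standard UIKit (rotatedClockwise is my own helper name, nothing GMV-specific):

import UIKit

// Rotate a UIImage 90° clockwise by redrawing it into a context whose
// width and height are swapped.
func rotatedClockwise(_ image: UIImage) -> UIImage {
    let newSize = CGSize(width: image.size.height, height: image.size.width)
    let renderer = UIGraphicsImageRenderer(size: newSize)
    return renderer.image { context in
        let cg = context.cgContext
        // Rotate about the centre of the new canvas; positive angles are
        // clockwise in UIKit's flipped coordinate system.
        cg.translateBy(x: newSize.width / 2, y: newSize.height / 2)
        cg.rotate(by: .pi / 2)
        image.draw(in: CGRect(x: -image.size.width / 2,
                              y: -image.size.height / 2,
                              width: image.size.width,
                              height: image.size.height))
    }
}

(The cheaper route, UIImage(cgImage: image.cgImage!, scale: image.scale, orientation: .right), doesn't seem to apply here: as far as I can tell, a UIImage created from a CIImage reports cgImage as nil.)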
Third update
I've managed to rotate the image so it's upright, but Google Mobile Vision still isn't detecting any faces. The image comes out slightly distorted, but I wouldn't expect that amount of distortion to affect Google Mobile Vision's response. So...
How do I use the GMVDetectorImageOrientation key to set the orientation?
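My current (unverified) attempt, adapted from the snippet in Google's video-frame sample, passes the orientation with each detection call. I'm not certain the GMVUtility call or the option key usage below is right for my setup:

// Derive a GMVImageOrientation from the current device orientation and
// the camera position, then hand it to the detector so it can rotate
// frames internally. (Adapted from Google's sample; untested in my app.)
let deviceOrientation = UIDevice.current.orientation
let gmvOrientation = GMVUtility.imageOrientation(
    from: deviceOrientation,
    with: .front,                          // position of the camera in use
    defaultDeviceOrientation: .portrait)   // fallback when orientation is unknown
let detectorOptions = [GMVDetectorImageOrientation: gmvOrientation.rawValue]
let Gfaces = GfaceDetector?.features(in: Gimage, options: detectorOptions) as? [GMVFaceFeature]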
Any help or responses are appreciated.