I have been trying to apply a filter to a specific part of a face detected in an image.
To filter the whole image I used Apple's sample code: https://developer.apple.com/documentation/avfoundation/cameras_and_media_capture/avcamfilter_applying_filters_to_a_capture_stream
Performance drops drastically as soon as I add a single line of face detection via CIDetector to the method that sends the CVPixelBuffers to the FilterRenderer class and then on to the MTKView, which presents the filtered buffers.
The pipeline looks like this: CMSampleBuffer > CVImageBuffer > CIImage > detect faces > apply the filter to the face only > get the CVPixelBuffer back from the FilterRenderer > send it to the MTKView.
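The "apply the filter to the face only" step isn't in the snippet further down yet; what I have in mind is ordinary Core Image cropping and compositing, roughly like this (a sketch only, the function name and the example filter are placeholders):

import CoreImage

// Sketch: filter only the face region, then composite it back over the full frame.
// `ciImage` is the full frame, `face` is a CIFaceFeature; "CIPhotoEffectNoir" is just an example filter.
func filterFaceOnly(in ciImage: CIImage, face: CIFaceFeature) -> CIImage {
    let faceRegion = ciImage.cropped(to: face.bounds)
    guard let filter = CIFilter(name: "CIPhotoEffectNoir") else { return ciImage }
    filter.setValue(faceRegion, forKey: kCIInputImageKey)
    guard let filteredFace = filter.outputImage else { return ciImage }
    // A cropped CIImage keeps its original extent, so the filtered face
    // composites back onto the same spot in the frame.
    return filteredFace.composited(over: ciImage)
}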
The "detect faces" step alone is already so slow that I can't imagine how slow it would get once I add any further processing on top of it (locating the eyes and mouth).
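That further processing would amount to little more than reading the landmark properties CIFaceFeature already exposes, along these lines (sketch only):

// Sketch: the extra per-face work I would eventually add (eye / mouth positions).
for case let face as CIFaceFeature in features {
    if face.hasLeftEyePosition && face.hasRightEyePosition {
        print("eyes at", face.leftEyePosition, face.rightEyePosition)
    }
    if face.hasMouthPosition {
        print("mouth at", face.mouthPosition)
    }
}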
You can see a sample implementation here: https://github.com/nipun0505/FaceDetectionMetal
The method in question is processVideo(sampleBuffer:):
func processVideo(sampleBuffer: CMSampleBuffer) {
    if !renderingEnabled {
        return
    }

    guard let videoPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
        let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer) else {
            return
    }

    var finalVideoPixelBuffer = videoPixelBuffer
    if let filter = videoFilter {
        if !filter.isPrepared {
            filter.prepare(with: formatDescription, outputRetainedBufferCountHint: 3)
        }

        // Detect faces (the result isn't used yet; this call alone is what tanks the frame rate)
        if let faceDetector = faceDetector {
            let features = faceDetector.features(in: CIImage(cvImageBuffer: videoPixelBuffer))
        }

        // Send the pixel buffer through the filter
        guard let filteredBuffer = filter.render(pixelBuffer: finalVideoPixelBuffer) else {
            print("Unable to filter video buffer")
            return
        }

        finalVideoPixelBuffer = filteredBuffer
    }

    previewView.pixelBuffer = finalVideoPixelBuffer
}
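For reference, faceDetector isn't defined in that snippet; it's a CIDetector created along these lines (the options shown are illustrative, the exact setup is in the linked repo):

import CoreImage

// Sketch: how the CIDetector is created; the accuracy option here is illustrative.
let faceDetector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])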
Am I doing something wrong here? I tried the same approach without the MTKView, where I just detected faces and overlaid a few images on an AVCaptureVideoPreviewLayer, and that ran perfectly smoothly. I can't figure out what is degrading the performance here.
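For comparison, the smooth non-Metal version was essentially the following (a simplified sketch of a method on the camera view controller; overlayLayer and previewLayer are placeholders for my actual layers, and video-orientation handling is omitted):

import AVFoundation
import CoreImage

// Sketch: detect faces on the video data queue, then just move a CALayer over the
// preview layer on the main thread. Simplified; orientation handling is omitted.
func overlayFace(from sampleBuffer: CMSampleBuffer) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let ciImage = CIImage(cvImageBuffer: pixelBuffer)
    guard let face = (faceDetector?.features(in: ciImage) as? [CIFaceFeature])?.first else { return }

    // CIFaceFeature.bounds is in image pixels with a bottom-left origin;
    // normalize and flip it so the preview layer can convert it to layer coordinates.
    let size = ciImage.extent.size
    let normalized = CGRect(x: face.bounds.minX / size.width,
                            y: 1 - face.bounds.maxY / size.height,
                            width: face.bounds.width / size.width,
                            height: face.bounds.height / size.height)

    DispatchQueue.main.async {
        self.overlayLayer.frame = self.previewLayer.layerRectConverted(fromMetadataOutputRect: normalized)
    }
}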