I'm working on the task of taking the front camera input, detecting faces, and getting each detected face as a UIImage object. I'm using AVFoundation to capture video and detect the faces, like this:
let input = try AVCaptureDeviceInput(device: captureDevice)
captureSession = AVCaptureSession()
captureSession!.addInput(input)
output = AVCaptureMetadataOutput()
captureSession?.addOutput(output)
output.setMetadataObjectsDelegate(self, queue: dispatch_get_main_queue())
output.metadataObjectTypes = [AVMetadataObjectTypeFace]
videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
videoPreviewLayer?.frame = view.layer.bounds
view.layer.addSublayer(videoPreviewLayer!)
captureSession?.startRunning()
In the didOutputMetadataObjects delegate method I get the AVMetadataFaceObject and highlight it with a red frame:
let metadataObj = metadataObjects[0] as! AVMetadataFaceObject
let faceObject = videoPreviewLayer?.transformedMetadataObjectForMetadataObject(metadataObj)
faceFrame?.frame = faceObject!.bounds
The question is: how can I get the detected faces as UIImages?
I tried hooking into didOutputSampleBuffer, but it is never called :c
Answer 0 (score: 3)
I did the same thing using didOutputSampleBuffer in Objective-C. It looks like this:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate);
    CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:(__bridge NSDictionary *)attachments];
    if (attachments)
        CFRelease(attachments);

    NSNumber *orientation = (__bridge NSNumber *)(CMGetAttachment(sampleBuffer, kCGImagePropertyOrientation, NULL));

    // Run Core Image face detection on the frame.
    NSArray *features = [[CIDetector detectorOfType:CIDetectorTypeFace context:nil options:@{ CIDetectorAccuracy: CIDetectorAccuracyHigh }] featuresInImage:ciImage options:@{ CIDetectorImageOrientation: orientation }];
    if (features.count == 1) {
        CIFaceFeature *faceFeature = [features firstObject];
        CGRect faceRect = faceFeature.bounds;

        // Render the full frame to a CGImage, wrap it in a UIImage, then crop out the face.
        CGImageRef tempImage = [[CIContext contextWithOptions:nil] createCGImage:ciImage fromRect:ciImage.extent];
        UIImage *image = [UIImage imageWithCGImage:tempImage scale:1.0 orientation:orientation.intValue];
        CGImageRelease(tempImage);
        UIImage *face = [image extractFace:faceRect];
    }
}
where extractFace: is a category (extension) on UIImage:
- (UIImage *)extractFace:(CGRect)rect {
    // Convert the rect from points to pixels before cropping the backing CGImage.
    rect = CGRectMake(rect.origin.x * self.scale,
                      rect.origin.y * self.scale,
                      rect.size.width * self.scale,
                      rect.size.height * self.scale);
    CGImageRef imageRef = CGImageCreateWithImageInRect(self.CGImage, rect);
    UIImage *result = [UIImage imageWithCGImage:imageRef scale:self.scale orientation:self.imageOrientation];
    CGImageRelease(imageRef);
    return result;
}
Setting up the video output:
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey: [NSNumber numberWithInt:kCMPixelFormat_32BGRA] };
videoOutput.alwaysDiscardsLateVideoFrames = YES;
self.videoOutputQueue = dispatch_queue_create("OutputQueue", DISPATCH_QUEUE_SERIAL);
[videoOutput setSampleBufferDelegate:self queue:self.videoOutputQueue];
[self.session addOutput:videoOutput];
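Since the question itself is in Swift, a rough Swift translation of the same idea might look like the sketch below (Swift 2-era syntax to match the question; this is only a sketch under those assumptions, with error handling and orientation handling omitted):

// 1) When configuring the session, also add a video data output and set its
//    sample buffer delegate, otherwise didOutputSampleBuffer is never called.
//    (The Objective-C answer above additionally forces a 32BGRA pixel format
//    via videoSettings.)
let videoOutput = AVCaptureVideoDataOutput()
videoOutput.alwaysDiscardsLateVideoFrames = true
let videoOutputQueue = dispatch_queue_create("OutputQueue", DISPATCH_QUEUE_SERIAL)
videoOutput.setSampleBufferDelegate(self, queue: videoOutputQueue)
captureSession?.addOutput(videoOutput)

// 2) AVCaptureVideoDataOutputSampleBufferDelegate callback.
func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let ciImage = CIImage(CVPixelBuffer: pixelBuffer)

    // Detect faces with Core Image, as in the Objective-C version above.
    let detector = CIDetector(ofType: CIDetectorTypeFace, context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    guard let face = detector.featuresInImage(ciImage).first as? CIFaceFeature else { return }

    // Render the frame once, then crop the face rect out of it.
    // Note: CIFaceFeature bounds are in Core Image's bottom-left coordinate
    // space, so the rect may need to be flipped before cropping.
    let fullFrame = CIContext(options: nil).createCGImage(ciImage, fromRect: ciImage.extent)
    if let faceCGImage = CGImageCreateWithImageInRect(fullFrame, face.bounds) {
        let faceImage = UIImage(CGImage: faceCGImage)
        // Hand faceImage back to the main queue for display here.
    }
}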
Answer 1 (score: 1)
- (UIImage *)screenshot {
    CGSize size = CGSizeMake(faceFrame.frame.size.width, faceFrame.frame.size.height);
    UIGraphicsBeginImageContextWithOptions(size, NO, [UIScreen mainScreen].scale);

    CGRect rec = CGRectMake(faceFrame.frame.origin.x, faceFrame.frame.origin.y, faceFrame.frame.size.width, faceFrame.frame.size.height);
    [_viewController.view drawViewHierarchyInRect:rec afterScreenUpdates:YES];

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Taking some hints from the above:
let contextImage: UIImage = <<screenshot>>!
let cropRect: CGRect = CGRectMake(x, y, width, height)
let imageRef: CGImageRef = CGImageCreateWithImageInRect(contextImage.CGImage, cropRect)!
let image: UIImage = UIImage(CGImage: imageRef, scale: contextImage.scale, orientation: contextImage.imageOrientation)
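Note that the UIImage returned by screenshot above is rendered at the screen scale, while CGImageCreateWithImageInRect works in pixel coordinates, so the crop rect generally has to be converted from points to pixels first (multiplying x, y, width and height by contextImage.scale, just as extractFace does in the previous answer).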
Answer 2 (score: 0)
I suggest using the UIImagePickerController class to implement a custom camera and pick multiple images for face detection. Check Apple's PhotoPicker sample code.
Launch UIImagePickerController with the camera as its sourceType, and handle its imagePickerController:didFinishPickingMediaWithInfo: delegate method to capture the image. You can also look at the takePicture function if that helps.
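A minimal Swift sketch of that approach (Swift 2-era syntax to match the question; it assumes the presenting view controller adopts UIImagePickerControllerDelegate and UINavigationControllerDelegate):

// Present the front camera with UIImagePickerController
// (self is assumed to conform to the two delegate protocols).
let picker = UIImagePickerController()
picker.sourceType = .Camera
picker.cameraDevice = .Front
picker.delegate = self
presentViewController(picker, animated: true, completion: nil)

// Delegate callback: grab the captured image and run face detection on it.
func imagePickerController(picker: UIImagePickerController,
                           didFinishPickingMediaWithInfo info: [String : AnyObject]) {
    if let image = info[UIImagePickerControllerOriginalImage] as? UIImage {
        // Feed `image` into a CIDetector here, as in the answers above.
    }
    picker.dismissViewControllerAnimated(true, completion: nil)
}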