Capture an image within bounds?

Asked: 2012-12-31 22:41:16

Tags: ios objective-c camera avcapturesession

I can capture an image from the iOS rear-facing camera. Everything works perfectly, except that I want the photo to be taken within the bounds of a UIView.

My code is below:

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view.

    session = [[AVCaptureSession alloc] init];

    session.sessionPreset = AVCaptureSessionPresetMedium;

    captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
    captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    captureVideoPreviewLayer.frame = vImagePreview.bounds;
    [vImagePreview.layer addSublayer:captureVideoPreviewLayer];

    AVCaptureDevice *device = [self backFacingCameraIfAvailable];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) {
        // Handle the error appropriately instead of adding a nil input.
        NSLog(@"ERROR: trying to open camera: %@", error);
        return;
    }
    [session addInput:input];

    stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys: AVVideoCodecJPEG, AVVideoCodecKey, nil];
    [stillImageOutput setOutputSettings:outputSettings];

    // Add the output to the session before looking up its connections;
    // until it is added, stillImageOutput.connections is empty.
    [session addOutput:stillImageOutput];

    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in stillImageOutput.connections) {
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqual:AVMediaTypeVideo] ) {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) {
            break;
        }
    }

    [session startRunning];
}

-(AVCaptureDevice *)backFacingCameraIfAvailable{

    NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    AVCaptureDevice *captureDevice = nil;
    for (AVCaptureDevice *device in videoDevices){
        if (device.position == AVCaptureDevicePositionBack){
            captureDevice = device;
            break;
        }
    }

    //  couldn't find one on the back, so just get the default video device.
    if (!captureDevice){
        captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    }
    return captureDevice;
}

Here is the code that captures the image:

- (IBAction)captureTask {
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in stillImageOutput.connections){
        for (AVCaptureInputPort *port in [connection inputPorts]){

            if ([[port mediaType] isEqual:AVMediaTypeVideo]){

                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) {
            break;
        }
    }

    NSLog(@"about to request a capture from: %@", stillImageOutput);
    [stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
     {
         CFDictionaryRef exifAttachments = CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
         if (exifAttachments) {
             // Do something with the attachments.
             NSLog(@"attachements: %@", exifAttachments);
         } else {
             NSLog(@"no attachments");
         }

         NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
         UIImage *image = [[UIImage alloc] initWithData:imageData];
         stillImage = image;

     }];
}

The problem I'm facing is that the picture is taken and saved to stillImage, but as far as I can tell the image covers the whole camera frame (the full iPhone screen). It is not limited to the bounds of the UIView *vImagePreview I created. Is there a way to clip the captured image to those bounds?

[EDIT]

After reading the documentation, I realized the image comes out at the resolution implied by session.sessionPreset = AVCaptureSessionPresetMedium;. Is there a way to make the image square, the way Instagram does? According to the documentation, none of the session presets are square :(

I tried the following:

captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResize;

However, that only changes how the preview is scaled to fit the current view; it does not make the captured image square.

2 Answers:

Answer 0 (score: 6)

I understand your frustration; the presets really should be customizable, or there should be more of them! What I do with my images is crop them around the center, and I wrote the following code for that:

- (UIImage *)crop:(UIImage *)image from:(CGSize)src to:(CGSize)dst
{
    // Center of the source image, then the top-left corner of a dst-sized
    // rect centered on that point.
    CGPoint cropCenter = CGPointMake((src.width/2), (src.height/2));
    CGPoint cropStart = CGPointMake((cropCenter.x - (dst.width/2)), (cropCenter.y - (dst.height/2)));
    CGRect cropRect = CGRectMake(cropStart.x, cropStart.y, dst.width, dst.height);

    // Cut that rect out of the underlying CGImage.
    CGImageRef cropRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
    UIImage* cropImage = [UIImage imageWithCGImage:cropRef];
    CGImageRelease(cropRef);

    return cropImage;
}

Here src is the original image size and dst is the crop size; image is, of course, the image you want to crop.
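
For example, here is a minimal sketch of how this could be called from the capture completion handler in the question to get a square, Instagram-style crop; the choice of side length and the call site are assumptions on my part, not part of the original answer:

// Hypothetical usage inside the captureStillImageAsynchronouslyFromConnection:
// completion handler from the question, assuming the crop:from:to: method
// above has been added to the same class.
UIImage *image = [[UIImage alloc] initWithData:imageData];

// Use the shorter side so the square always fits inside the captured frame.
CGFloat side = MIN(image.size.width, image.size.height);
stillImage = [self crop:image from:image.size to:CGSizeMake(side, side)];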

Answer 1 (score: 0)

If the device has a Retina display, then the crop looks like this:

if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)] == YES && [[UIScreen mainScreen] scale] == 2.00)
{
    // Retina: the captured image has twice as many pixels as the point size
    // given in src, so double both the crop center and the crop rect.
    CGPoint cropCenter = CGPointMake((src.width), (src.height));
    CGPoint cropStart = CGPointMake((cropCenter.x - (dst.width)), (cropCenter.y - (dst.height)));
    CGRect cropRect = CGRectMake(cropStart.x, cropStart.y, dst.width*2, dst.height*2);
    CGImageRef cropRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
    UIImage* cropImage = [UIImage imageWithCGImage:cropRef];
    CGImageRelease(cropRef);
    return cropImage;
}
else
{
    // Non-Retina: pixel size matches the point size, so use the values as-is.
    CGPoint cropCenter = CGPointMake((src.width/2), (src.height/2));
    CGPoint cropStart = CGPointMake((cropCenter.x - (dst.width/2)), (cropCenter.y - (dst.height/2)));
    CGRect cropRect = CGRectMake(cropStart.x, cropStart.y, dst.width, dst.height);
    CGImageRef cropRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
    UIImage* cropImage = [UIImage imageWithCGImage:cropRef];
    CGImageRelease(cropRef);

    return cropImage;
}
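
As a variation (my own sketch, not part of the original answer), the two branches can be collapsed by reading the screen scale once and multiplying it into the crop rect; the method name cropImage:from:to: below is hypothetical:

- (UIImage *)cropImage:(UIImage *)image from:(CGSize)src to:(CGSize)dst
{
    // 1.0 on non-Retina screens, 2.0 on Retina screens (the scale selector
    // exists on iOS 4 and later).
    CGFloat scale = [[UIScreen mainScreen] respondsToSelector:@selector(scale)] ? [[UIScreen mainScreen] scale] : 1.0;

    // Work in pixels: the captured image is assumed to have scale-times as
    // many pixels as the point sizes passed in src and dst.
    CGPoint cropCenter = CGPointMake(src.width / 2.0 * scale, src.height / 2.0 * scale);
    CGRect cropRect = CGRectMake(cropCenter.x - dst.width / 2.0 * scale,
                                 cropCenter.y - dst.height / 2.0 * scale,
                                 dst.width * scale,
                                 dst.height * scale);

    CGImageRef cropRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
    UIImage *cropImage = [UIImage imageWithCGImage:cropRef];
    CGImageRelease(cropRef);
    return cropImage;
}

Folding the scale into the rect this way keeps the crop centered for any screen scale instead of special-casing 2.0.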