Why can't I duplicate my AVCapture session?

Date: 2012-06-27 15:59:08

Tags: objective-c ios uiview avcapturesession

I have an iOS app that uses AVCaptureSession, AVCaptureVideoPreviewLayer, CALayer, and UIImageView to capture the device's camera input.

The problem is that I need to display one AVCapture session in *two different views*.

Right now the first AVCapture "view" works and displays the video fine, but the second one displays for a few milliseconds and then freezes (it doesn't even render a complete frame).

I'm not sure this is even possible, since (as far as I know) only one AVCaptureSession at a time can capture the device's camera input; and if that's not the case, then it must be a memory issue.

How can I use the same AVCapture session in two different views?

Here is the code I'm using:

//CameraControl.h

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
#import <CoreGraphics/CoreGraphics.h>
#import <CoreVideo/CoreVideo.h>
#import <CoreMedia/CoreMedia.h>

@interface CameraControl : UIViewController <AVCaptureVideoDataOutputSampleBufferDelegate>
{
    AVCaptureSession *_captureSession;
    UIImageView *_imageView;
    CALayer *_customLayer;
    AVCaptureVideoPreviewLayer *_prevLayer;
}

// The capture session takes the input from the camera and captures it
@property (nonatomic, retain) AVCaptureSession *captureSession;

// The UIImageView we use to display the image generated from the imageBuffer
@property (nonatomic, retain) UIImageView *imageView;
// The CALayer we use to display the CGImageRef generated from the imageBuffer
@property (nonatomic, retain) CALayer *customLayer;
// The CALayer customized by Apple to display the video corresponding to a capture session
@property (nonatomic, retain) AVCaptureVideoPreviewLayer *prevLayer;

// This method initializes the capture session
- (void)initCapture;
@end

And here is the implementation:

//CameraControl.m

#import "CameraControl.h"
#import <MobileCoreServices/MobileCoreServices.h>

@interface CameraControl ()

@end

@implementation CameraControl
@synthesize captureSession = _captureSession;
@synthesize imageView = _imageView;
@synthesize customLayer = _customLayer;
@synthesize prevLayer = _prevLayer;

- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation
{
    if ([[UIDevice currentDevice] userInterfaceIdiom] == UIUserInterfaceIdiomPhone) {
        return (interfaceOrientation == UIInterfaceOrientationPortrait);
    } else {
        return YES;
    }
}
- (id)init {
    self = [super init];
    if (self) {
        /*We initialize some variables (they may or may not be initialized depending on what is commented out)*/
        self.imageView = nil;
        self.prevLayer = nil;
        self.customLayer = nil;
    }
    return self;
}

- (void)viewDidLoad {
    [super viewDidLoad];
    /*We initialize the capture*/
    [self initCapture];
}

- (void)initCapture {
    /*We setup the input*/
    AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput 
                                    deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo] 
                                      error:nil];
    /*We set up the output*/
    AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    /*While a frame is being processed in the -captureOutput:didOutputSampleBuffer:fromConnection: delegate method, no other frames are added to the queue. If you don't want this behaviour, set the property to NO */
    captureOutput.alwaysDiscardsLateVideoFrames = YES;
    /*We specify a minimum duration for each frame (play with this setting to avoid having too many frames waiting in the queue, because that can cause memory issues). It is the inverse of the maximum frame rate: a minimum frame duration of 1/10 second gives a maximum frame rate of 10 fps, i.e. we declare that we cannot process more than 10 frames per second.*/
    //captureOutput.minFrameDuration = CMTimeMake(1, 10);

    /*We create a serial queue to handle the processing of our frames*/
    dispatch_queue_t queue;
    queue = dispatch_queue_create("cameraQueue", NULL);
    [captureOutput setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);
    // Set the video output to store frames in BGRA (it is supposed to be faster)
    NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey; 
    NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]; 
    NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key]; 
    [captureOutput setVideoSettings:videoSettings]; 
    /*And we create a capture session*/
    self.captureSession = [[[AVCaptureSession alloc] init] autorelease];
    /*We add the input and the output*/
    [self.captureSession addInput:captureInput];
    [self.captureSession addOutput:captureOutput];
    [captureOutput release];
    /*We use medium quality; on the iPhone 4 this demo would lag too much, as the conversion to UIImage and CGImage demands too many resources at 720p.*/
    [self.captureSession setSessionPreset:AVCaptureSessionPresetMedium];
    /*We add the Custom Layer (We need to change the orientation of the layer so that the video is displayed correctly)*/
    self.customLayer = [CALayer layer];
    self.customLayer.frame = self.view.bounds;
    //self.customLayer.transform = CATransform3DRotate(CATransform3DIdentity, M_PI/2.0f, 0, 0, 1);
    self.customLayer.contentsGravity = kCAGravityResizeAspectFill;
    [self.view.layer addSublayer:self.customLayer];
    /*We add the imageView*/
    self.imageView = [[[UIImageView alloc] initWithFrame:self.view.bounds] autorelease];
    [self.view addSubview:self.imageView];
    /*We add the preview layer*/
    self.prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
    self.prevLayer.frame = self.view.bounds;
    self.prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer addSublayer:self.prevLayer];
    /*We start the capture*/
    [self.captureSession startRunning];

}

#pragma mark -
#pragma mark AVCaptureSession delegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection 
{ 
    /*We create an autorelease pool because we are not on the main queue, so this code
     does not run on the main thread and the thread we are on needs its own pool.*/
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    /*Lock the image buffer*/
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    /*Get information about the image*/
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    /*Create a CGImageRef from the CVImageBufferRef*/
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);

    /*We release some components*/
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);

    /*We display the result on the custom layer. All the display work must be done on the main thread because
     UIKit is not thread safe, and since we are not on the main thread (remember we didn't use the main queue)
     we use performSelectorOnMainThread to ask the CALayer to display the CGImage.*/
    [self.customLayer performSelectorOnMainThread:@selector(setContents:) withObject:(id)newImage waitUntilDone:YES];

    /*We display the result on the image view (we need to change the orientation of the image so that the video is displayed correctly).
     Same as with the CALayer, we are not on the main thread, so ...*/
    UIImage *image = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationRight];

    /*We release the CGImageRef*/
    CGImageRelease(newImage);

    [self.imageView performSelectorOnMainThread:@selector(setImage:) withObject:image waitUntilDone:YES];

    /*We unlock the image buffer*/
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    [pool drain];
} 

#pragma mark -
#pragma mark Memory management

- (void)viewDidUnload {
    self.imageView = nil;
    self.customLayer = nil;
    self.prevLayer = nil;
}

- (void)dealloc {
    [_captureSession release];
    [_imageView release];
    [_customLayer release];
    [_prevLayer release];
    [super dealloc];
}


@end

2 Answers:

Answer 0 (score: 2)

To do this, you need multiple outputs attached to a single session, rather than multiple sessions. Quoting the AV Foundation Programming Guide:

    To get output from a capture session, you add one or more outputs ... You add outputs to a capture session using addOutput:.
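
As a rough sketch of what "one session, many consumers" looks like for the asker's two-view case (this is not code from the guide; viewA and viewB are placeholder UIViews, and note that an AVCaptureVideoPreviewLayer attaches to the session directly rather than through addOutput:, but the principle of sharing a single session is the same):

NSError *error = nil;
AVCaptureSession *session = [[[AVCaptureSession alloc] init] autorelease];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput
    deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo]
                    error:&error];
if (input && [session canAddInput:input]) {
    [session addInput:input];
}

// One preview layer per view, both driven by the SAME session.
AVCaptureVideoPreviewLayer *layerA = [AVCaptureVideoPreviewLayer layerWithSession:session];
layerA.frame = viewA.bounds;   // viewA is a placeholder UIView
[viewA.layer addSublayer:layerA];

AVCaptureVideoPreviewLayer *layerB = [AVCaptureVideoPreviewLayer layerWithSession:session];
layerB.frame = viewB.bounds;   // viewB is a placeholder UIView
[viewB.layer addSublayer:layerB];

[session startRunning];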

Answer 1 (score: 2)

Why do you need two sessions? It would be easier to give advice if you explained what you are trying to achieve.

Since you have only one camera (you can't use the back and front cameras simultaneously) and one microphone, you normally need only one session. Two sessions would mean two image-buffer streams coming from the camera, which puts unnecessary strain on the device.

If you want to change any parameter of the session, you can do it dynamically at runtime:

- (IBAction)switchHD:(UISegmentedControl *)sender {
  [session beginConfiguration];
  if (sender.selectedSegmentIndex) {
    session.sessionPreset = AVCaptureSessionPreset1280x720;
    self.prevLayer.videoGravity = AVLayerVideoGravityResizeAspect;
  } else {
    session.sessionPreset = AVCaptureSessionPreset640x480;
    self.prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
  }
  [session commitConfiguration];
}
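
One hedged note on the snippet above: not every device supports every preset, so it is safer to guard the change with -canSetSessionPreset: (a standard AVCaptureSession method; the session variable is assumed to be the same ivar used above) before committing it:

[session beginConfiguration];
if ([session canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
    session.sessionPreset = AVCaptureSessionPreset1280x720;
} else {
    // Fall back for hardware that cannot deliver 720p.
    session.sessionPreset = AVCaptureSessionPresetMedium;
}
[session commitConfiguration];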