Storing UIImages and then converting them to a movie

Asked: 2012-07-03 07:47:17

Tags: ios video uiimage avfoundation core-image

So I've been working on a video-capture project that lets users capture images and video and apply filters. I'm using the AVFoundation framework, and I've succeeded in capturing still images and in capturing video frames as UIImage objects... the only thing left is recording the video.

Here is my code:

- (void)initCapture {
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetMedium;

    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) {
        // Handle the error appropriately.
        NSLog(@"ERROR: trying to open camera: %@", error);
        return;
    }
    [session addInput:input];

    // Still image output for photo capture.
    stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:
                                    AVVideoCodecJPEG, AVVideoCodecKey, nil];
    [stillImageOutput setOutputSettings:outputSettings];
    [outputSettings release];
    [session addOutput:stillImageOutput];

    // Video data output for per-frame processing; drop frames that
    // arrive while the delegate is still busy with the previous one.
    captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    captureOutput.alwaysDiscardsLateVideoFrames = YES;

    // Deliver frames on a dedicated serial queue.
    dispatch_queue_t queue = dispatch_queue_create("cameraQueue", NULL);
    [captureOutput setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    // Ask for BGRA frames so they can be wrapped in a CGBitmapContext.
    NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
    [captureOutput setVideoSettings:videoSettings];
    [session addOutput:captureOutput];

    [session startRunning];
}




- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    // Lock the pixel buffer and wrap its BGRA bytes in a bitmap context.
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow,
                                                    colorSpace,
                                                    kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);

    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);

    // Rotate to match the device orientation.
    UIImage *image = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationRight];
    CGImageRelease(newImage);

    // `filter` applies the currently selected filter (declared elsewhere).
    UIImage *ima = [filter applyFilter:image];

    /*if (isRecording == YES)
    {
        [imageArray addObject:ima];
    }
    NSLog(@"Count= %d", imageArray.count);*/

    // UIKit must only be touched on the main thread.
    [self.imageView performSelectorOnMainThread:@selector(setImage:) withObject:ima waitUntilDone:YES];

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    [pool drain];
}

I tried storing the UIImages in a mutable array, but that was a stupid idea. Any ideas? Any help would be appreciated.

1 Answer:

Answer 0 (score: 1)

Are you using a CIFilter? If not, you may want to look into it for fast, GPU-based transformations.
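For illustration, a minimal sketch of pushing a captured frame through Core Image instead of Core Graphics, inside your delegate method where imageBuffer is already available. CISepiaTone is just a stand-in for whatever filter you actually apply, and ciContext is an assumed ivar created once (e.g. with [CIContext contextWithOptions:nil] and retained):

    // Sketch only: replace CISepiaTone with your own filter chain.
    CIImage *inputImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
    CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
    [sepia setValue:inputImage forKey:kCIInputImageKey];
    [sepia setValue:[NSNumber numberWithFloat:0.8f] forKey:kCIInputIntensityKey];
    CIImage *outputImage = [sepia valueForKey:kCIOutputImageKey];

    // Rendering happens on the GPU; wrap the result for display.
    CGImageRef cgImage = [ciContext createCGImage:outputImage
                                         fromRect:[outputImage extent]];
    UIImage *filtered = [UIImage imageWithCGImage:cgImage
                                            scale:1.0
                                      orientation:UIImageOrientationRight];
    CGImageRelease(cgImage);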

You'll probably want to record the resulting images directly to an AVAssetWriter as they are generated. Look at Apple's RosyWriter sample code to see how they do this. In summary, they use an AVAssetWriter to capture the frames to a temporary file, and when recording finishes they save that file to the camera roll.
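As a rough sketch (not the actual RosyWriter code), the writer setup could look like the following; assetWriter, writerInput, and pixelBufferAdaptor are assumed ivars, and the 640x480 size is a placeholder you would match to your session preset:

    // Write H.264 video to a temporary .mov file.
    NSError *error = nil;
    NSURL *outputURL = [NSURL fileURLWithPath:
        [NSTemporaryDirectory() stringByAppendingPathComponent:@"capture.mov"]];
    assetWriter = [[AVAssetWriter alloc] initWithURL:outputURL
                                            fileType:AVFileTypeQuickTimeMovie
                                               error:&error];

    NSDictionary *compressionSettings = [NSDictionary dictionaryWithObjectsAndKeys:
        AVVideoCodecH264, AVVideoCodecKey,
        [NSNumber numberWithInt:640], AVVideoWidthKey,
        [NSNumber numberWithInt:480], AVVideoHeightKey, nil];
    writerInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo
                                                 outputSettings:compressionSettings];
    writerInput.expectsMediaDataInRealTime = YES;

    pixelBufferAdaptor = [[AVAssetWriterInputPixelBufferAdaptor alloc]
        initWithAssetWriterInput:writerInput
        sourcePixelBufferAttributes:nil];
    [assetWriter addInput:writerInput];

Then in captureOutput:didOutputSampleBuffer:fromConnection: you append each frame with its own timestamp instead of collecting UIImages:

    // Start the session on the first frame, then append pixel buffers.
    // (If you filter with Core Image, render the filtered CIImage back
    // into a CVPixelBuffer first, e.g. with -[CIContext render:toCVPixelBuffer:].)
    CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    if (isRecording) {
        if (assetWriter.status == AVAssetWriterStatusUnknown) {
            [assetWriter startWriting];
            [assetWriter startSessionAtSourceTime:timestamp];
        }
        if (writerInput.readyForMoreMediaData) {
            [pixelBufferAdaptor appendPixelBuffer:imageBuffer
                             withPresentationTime:timestamp];
        }
    }

When recording stops, call [writerInput markAsFinished] and [assetWriter finishWriting], then save the finished file to the camera roll (for example via ALAssetsLibrary).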

One caveat, though: RosyWriter got 4 fps on my fourth-generation iPod touch, because it does a brute-force modification of the pixels on the CPU. Core Image uses GPU-based filters, and with them I was able to reach 12 fps, which in my opinion is still not what it should be.

Good luck!