How to create a real-time image effects processing application on iOS

Date: 2013-05-29 04:23:25

Tags: ios image-processing

I am using AVCaptureSession to receive images from the iPhone camera. It delivers each frame in a delegate callback. In this callback I create an image and spawn another thread to process it:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection{

    NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    //Lock the image buffer
    CVPixelBufferLockBaseAddress(imageBuffer,0);

    //Get information about the image
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    //Create a CGImageRef from the CVImageBufferRef
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Little-endian BGRA, the usual layout for kCVPixelFormatType_32BGRA capture output
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);

    // Release the bitmap context and color space
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);

    UIImage* uiimage = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationDown];
    CGImageRelease(newImage);
    CVPixelBufferUnlockBaseAddress(imageBuffer,0);

    // Only start a new processing thread when the previous one has finished;
    // frames that arrive while a thread is still running are simply dropped.
    if (processImageThread == nil || processImageThread.isExecuting == NO) {
        [processImageThread release];
        processImageThread = [[NSThread alloc] initWithTarget:self selector:@selector(processImage:) object:uiimage];
        [processImageThread start];
    }

    [pool drain];
}
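
As an aside, everything from CVPixelBufferLockBaseAddress down to imageWithCGImage: above runs on the CPU for every frame, before any filtering has even started. A minimal sketch of a cheaper hand-off (assuming the rest of the pipeline stays in Core Image; iOS 5 and later) wraps the pixel buffer in a CIImage directly:

    // Sketch only: inside the same delegate callback, hand the pixel buffer
    // to Core Image without copying it through a CGBitmapContext.
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage* ciimage = [CIImage imageWithCVPixelBuffer:imageBuffer];
    // ...filter and render ciimage without ever creating an intermediate UIImage...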

I process the image on another thread using CIFilter:

- (void) processImage:(UIImage*)image{
    NSLog(@"Begin process");
    CIImage* ciimage = [CIImage imageWithCGImage:image.CGImage];
    CIFilter* filter = [CIFilter filterWithName:@"CIColorMonochrome"];
    [filter setDefaults];
    [filter setValue:ciimage forKey:@"inputImage"];
    [filter setValue:[CIColor colorWithRed:0.5 green:0.5 blue:1.0] forKey:@"inputColor"];
    CIImage* ciResult = [filter outputImage];

    CIContext* context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:ciResult fromRect:[ciResult extent]];
    UIImage* uiResult = [UIImage imageWithCGImage:cgImage scale:1.0 orientation:UIImageOrientationRight];
    CFRelease(cgImage);

    [self performSelectorOnMainThread:@selector(setImageForImageView:) withObject:uiResult waitUntilDone:YES];
    NSLog(@"End process");
}
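
One detail here is costly: contextWithOptions: builds a brand-new CIContext on every frame, and context creation is expensive. A CIContext is meant to be created once and reused. A minimal sketch, assuming a hypothetical ivar named ciContext and the manual retain/release used elsewhere in the question:

    // Sketch only: create the context once (e.g. in init) and reuse it per frame.
    if (ciContext == nil) {
        ciContext = [[CIContext contextWithOptions:nil] retain]; // ciContext is a hypothetical ivar
    }
    CGImageRef cgImage = [ciContext createCGImage:ciResult fromRect:[ciResult extent]];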

And I set the resulting image on the view's layer:

- (void) setImageForImageView:(UIImage*)image{
    self.view.layer.contents = image.CGImage;
}
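
Note also that performSelectorOnMainThread: with waitUntilDone:YES in processImage: above stalls the processing thread until the main thread has finished drawing. A non-blocking hand-off with GCD (iOS 4 and later) would look like this small sketch:

    // Sketch only: hop to the main queue without blocking the worker thread.
    dispatch_async(dispatch_get_main_queue(), ^{
        [self setImageForImageView:uiResult];
    });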

It is very laggy. I found an open-source project that implements a very smooth real-time image-effects application (it also uses AVCaptureSession). So what is the difference between my code and theirs? How do I build a real-time image-effects processing application?

Here is the link to the open-source project: https://github.com/gobackspaces/DLCImagePickerController#readme

1 Answer:

Answer 0 (score: 4)

The open-source example you reference in the question uses Brad Larson's excellent open-source library GPUImage for real-time photo and video processing. That library performs its image processing with GPU-based filters (OpenGL ES 2.0), which makes it much faster than the CPU-based filters used by the Core Image framework.

GPUImage

The GPUImage framework is a BSD-licensed iOS library that lets you apply GPU-accelerated filters and other effects to images, live camera video, and movies. In comparison to Core Image (part of iOS 5.0), GPUImage allows you to write your own custom filters, supports deployment to iOS 4.0, and has a simpler interface. However, it currently lacks some of the more advanced features of Core Image, such as face detection.

For massively parallel operations like processing images or live video frames, GPUs have some significant performance advantages over CPUs. On an iPhone 4, a simple image filter can run more than 100 times faster on the GPU than an equivalent CPU-based filter.
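
For a concrete picture of the difference, here is a minimal sketch of the question's monochrome effect as a GPUImage pipeline. The class and method names (GPUImageVideoCamera, GPUImageMonochromeFilter, GPUImageView) follow the GPUImage README; verify them against the version you actually use:

#import "GPUImage.h"

// Sketch only: assumes this runs in a view controller (e.g. in viewDidLoad)
// and that videoCamera is kept alive, e.g. as an ivar.
GPUImageVideoCamera *videoCamera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

// Same tint as the CIColorMonochrome filter in the question.
GPUImageMonochromeFilter *filter = [[GPUImageMonochromeFilter alloc] init];
[filter setColorRed:0.5 green:0.5 blue:1.0];

// Display view; frames stay on the GPU from capture through filtering to display.
GPUImageView *filteredView = [[GPUImageView alloc] initWithFrame:self.view.bounds];
[self.view addSubview:filteredView];

[videoCamera addTarget:filter];
[filter addTarget:filteredView];
[videoCamera startCameraCapture];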