Video filtering on the iPhone is slow

Date: 2012-01-08 13:56:26

Tags: iphone ios

I am trying to filter video on the iPhone. Here is my program structure and source code:

AppDelegate.h
AppDelegate.m
ViewController.h
ViewController.m

The AppDelegate files are unchanged from the defaults. Here is my ViewController.

//ViewController.h

#import <UIKit/UIKit.h>
#import <GLKit/GLKit.h>
#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>
#import <CoreVideo/CoreVideo.h>
#import <QuartzCore/QuartzCore.h>
#import <CoreImage/CoreImage.h>
#import <ImageIO/ImageIO.h>

@interface ViewController : GLKViewController <AVCaptureVideoDataOutputSampleBufferDelegate>{
    AVCaptureSession *avCaptureSession;
    CIContext *coreImageContext;
    CIImage *maskImage;
    CGSize screenSize;
    CGContextRef cgContext;
    GLuint _renderBuffer;
    float scale;
}

@property (strong, nonatomic) EAGLContext *context;

-(void)setupCGContext;

@end

// ViewController.m
#import "ViewController.h"

@implementation ViewController

@synthesize context;

- (void)viewDidLoad
{
    [super viewDidLoad];
    self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    if (!self.context) {
        NSLog(@"Failed to create ES context");
    }

    GLKView *view = (GLKView *)self.view;
    view.context = self.context;
    view.drawableDepthFormat = GLKViewDrawableDepthFormat24;

    coreImageContext = [CIContext contextWithEAGLContext:self.context];

    glGenRenderbuffers(1, &_renderBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, _renderBuffer);

    NSError *error;
    AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
    AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];

    [dataOutput setAlwaysDiscardsLateVideoFrames:YES]; 
    [dataOutput setVideoSettings:[NSDictionary  dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] 
                                                              forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
    [dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

    avCaptureSession = [[AVCaptureSession alloc] init];
    [avCaptureSession beginConfiguration];
    [avCaptureSession setSessionPreset:AVCaptureSessionPreset1280x720];
    [avCaptureSession addInput:input];
    [avCaptureSession addOutput:dataOutput];
    [avCaptureSession commitConfiguration];
    [avCaptureSession startRunning];

    [self setupCGContext];
    CGImageRef cgImg = CGBitmapContextCreateImage(cgContext);
    maskImage = [CIImage imageWithCGImage:cgImg];
    CGImageRelease(cgImg);
}

-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    image = [CIFilter   filterWithName:@"CISepiaTone" keysAndValues:kCIInputImageKey, 
                        image, @"inputIntensity", 
                        [NSNumber numberWithFloat:0.8], 
                        nil].outputImage;

    [coreImageContext drawImage:image atPoint:CGPointZero fromRect:[image extent] ];

    [self.context presentRenderbuffer:GL_RENDERBUFFER];
}

-(void)setupCGContext {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * screenSize.width;
    NSUInteger bitsPerComponent = 8;
    cgContext = CGBitmapContextCreate(NULL, screenSize.width, screenSize.height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);

    CGColorSpaceRelease(colorSpace);
}

The sepia filter works, but the video is a little slow. When I don't apply the filter, the video runs normally. Any ideas on how to improve this and make the video faster?

Thanks.

3 Answers:

Answer 0 (Score: 11)

As I described here, the sepia filter in Core Image wasn't quite able to run in real time, but other filters might be. It depends on the hardware capabilities of the target device, as well as the iOS version (Core Image has improved significantly in performance over the last several iOS versions).

However, if I may plug my open source framework again, GPUImage lets you do this much, much faster. It can apply a sepia-tone filter to a 640x480 frame of video in 2.5 ms on an iPhone 4, which is more than fast enough for the 30 FPS video from that camera.

The following code will take live video from the rear camera of an iOS device, filter it, and display that video in a portrait-oriented view:

videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];

sepiaFilter = [[GPUImageSepiaFilter alloc] init];
GPUImageRotationFilter *rotationFilter = [[GPUImageRotationFilter alloc] initWithRotation:kGPUImageRotateRight];

[videoCamera addTarget:rotationFilter];
[rotationFilter addTarget:sepiaFilter];
filterView = [[GPUImageView alloc] initWithFrame:self.view.bounds];
[self.view addSubview:filterView];
[sepiaFilter addTarget:filterView];

[videoCamera startCameraCapture];

Answer 1 (Score: 3)

I realize this is an old question now, but...

[dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

That line causes your video callback to be invoked on the main (UI) thread.

If you change it to something like:

[dataOutput setSampleBufferDelegate:self
                              queue:dispatch_queue_create("cQ", DISPATCH_QUEUE_SERIAL)];

then in your callback, if you need to update your UI, you should do:

dispatch_async(dispatch_get_main_queue(), ^{
    [coreImageContext drawImage:image atPoint:CGPointZero fromRect:[image extent] ];
    [self.context presentRenderbuffer:GL_RENDERBUFFER];
});

This will help a lot, because the computationally expensive work executes on a background thread and the image drawing does not affect the capture.
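Putting the two changes together, here is a minimal sketch of the restructured callback, assuming the same coreImageContext and self.context ivars from the question's code (the "cQ" queue name above is arbitrary):

-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

    // Runs on the background capture queue, so filtering no longer blocks the UI.
    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    image = [CIFilter   filterWithName:@"CISepiaTone" keysAndValues:kCIInputImageKey,
                        image, @"inputIntensity",
                        [NSNumber numberWithFloat:0.8],
                        nil].outputImage;

    // Only the drawing and presentation are dispatched back to the main thread.
    dispatch_async(dispatch_get_main_queue(), ^{
        [coreImageContext drawImage:image atPoint:CGPointZero fromRect:[image extent]];
        [self.context presentRenderbuffer:GL_RENDERBUFFER];
    });
}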

A side note:

Blindly using sample code you find on the internet without understanding how the technology works is not a good way to develop apps (and a lot of people are guilty of this).

Answer 2 (Score: 2)

The following:

CIFilter   filterWithName:@"CISepiaTone"

gets called every time you get a buffer/frame. You only need to create the filter ONCE, so move it outside the callback and you can still use the filter.
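A minimal sketch of that change, assuming a hypothetical sepiaFilter ivar of type CIFilter * added to the class (that name is not in the original code):

// In viewDidLoad: create the filter ONCE and keep it in an ivar.
sepiaFilter = [CIFilter filterWithName:@"CISepiaTone"];
[sepiaFilter setValue:[NSNumber numberWithFloat:0.8] forKey:@"inputIntensity"];

// In captureOutput:didOutputSampleBuffer:fromConnection:, reuse it per frame:
CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
[sepiaFilter setValue:image forKey:kCIInputImageKey];
image = sepiaFilter.outputImage;

This way only the input image changes on each frame, and the cost of looking up and constructing the filter object is paid once instead of thirty times per second.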