Why doesn't my image update when I set it from the capture output delegate?

Asked: 2012-03-01 10:01:26

Tags: ios xcode avfoundation

I'm trying to do something fairly simple: display the video layer full screen and, once per second, update a UIImage with the CMSampleBufferRef obtained at that moment. However, I've run into two different problems. The first is that changing:

[connection setVideoMaxFrameDuration:CMTimeMake(1, 1)];
[connection setVideoMinFrameDuration:CMTimeMake(1, 1)];

also modifies the video preview layer. I thought it would only change the rate at which AV Foundation sends frames to the delegate, but it seems to affect the entire session (or at least it looks that way), so my preview video also updates only once per second. I suppose I could drop these lines and instead add a timer in the delegate so that the CMSampleBufferRef is forwarded to another method once per second, but I don't know whether that is the right approach.
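If the timer-free route is the right one, I imagine it would look roughly like this (just a sketch; `lastTimestamp` would be a new `CMTime` ivar I'd add, initialized to `kCMTimeZero`, and the connection's frame rate would be left untouched so the preview stays smooth):

```objectivec
// Sketch only: assumes a CMTime ivar `lastTimestamp` initialized to kCMTimeZero.
// The delegate still receives every frame; we simply skip most of them.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    if (CMTimeGetSeconds(CMTimeSubtract(timestamp, lastTimestamp)) < 1.0) {
        return; // less than a second since the last processed frame: ignore it
    }
    lastTimestamp = timestamp;
    // ... convert the buffer and display the image here ...
}
```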

My second problem is that the UIImageView is not updating; sometimes it updates only once and never changes afterward. I'm using this method to update it:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    //NSData *jpeg = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:sampleBuffer] ;
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    [imageView setImage:image];
    // Add your code here that uses the image.
    NSLog(@"update");
}

which I took from Apple's example. I verified that the method is called correctly once per second by watching the "update" log messages, but the image doesn't change at all. Also, is sampleBuffer destroyed automatically, or do I have to release it myself?

Here are the other two relevant methods. viewDidLoad:

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.

    session = [[AVCaptureSession alloc] init];

    // Add inputs and outputs.
    if ([session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
        session.sessionPreset = AVCaptureSessionPreset640x480;
    }
    else {
        // Handle the failure.
        NSLog(@"Cannot set session preset to 640x480");
    }

    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];

    if (!input) {
        // Handle the error appropriately.
        NSLog(@"Could not create input: %@", error);
    }

    if ([session canAddInput:input]) {
        [session addInput:input];
    }
    else {
        // Handle the failure.
        NSLog(@"Could not add input");
    }

    // DATA OUTPUT
    dataOutput = [[AVCaptureVideoDataOutput alloc] init];

    if ([session canAddOutput:dataOutput]) {
        [session addOutput:dataOutput];

        dataOutput.videoSettings = 
        [NSDictionary dictionaryWithObject: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                    forKey: (id)kCVPixelBufferPixelFormatTypeKey];
        //dataOutput.minFrameDuration = CMTimeMake(1, 15);
        //dataOutput.minFrameDuration = CMTimeMake(1, 1);
        AVCaptureConnection *connection = [dataOutput connectionWithMediaType:AVMediaTypeVideo];

        [connection setVideoMaxFrameDuration:CMTimeMake(1, 1)];
        [connection setVideoMinFrameDuration:CMTimeMake(1, 1)];

    }
    else {
        // Handle the failure.
        NSLog(@"Could not add output");
    }
    // DATA OUTPUT END

    dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
    [dataOutput setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);


    captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];

    [captureVideoPreviewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];

    [captureVideoPreviewLayer setBounds:videoLayer.layer.bounds];
    [captureVideoPreviewLayer setPosition:videoLayer.layer.position];

    [videoLayer.layer addSublayer:captureVideoPreviewLayer];

    [session startRunning];
}

Converting the CMSampleBufferRef to a UIImage:

- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer 
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0); 

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer); 

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer); 
    size_t height = CVPixelBufferGetHeight(imageBuffer); 

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, 
                                                 bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); 
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context); 
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer,0);

    // Free up the context and color space
    CGContextRelease(context); 
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return (image);
}

Thanks in advance for any help.

1 Answer:

Answer 0 (score: 8):

From the documentation of the captureOutput:didOutputSampleBuffer:fromConnection: method:

  "This method is called on the dispatch queue specified by the output's sampleBufferCallbackQueue property."

This means that if you need to update the UI with the buffer received in this method, you have to do it on the main queue, like this:

- (void)captureOutput:(AVCaptureOutput *)captureOutput 
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {

    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    dispatch_async(dispatch_get_main_queue(), ^{
        [imageView setImage:image];
    });
}

Edit, regarding your first question: I'm not sure I understand it, but if you only want to update the image once per second, you can also keep a "lastImageUpdateTime" value and compare against it in the didOutputSampleBuffer: method. If enough time has passed, update the image there; otherwise ignore the sample buffer.
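That check could look something like this (a sketch only; "lastImageUpdateTime" is an illustrative NSTimeInterval ivar you would add yourself, initially 0):

```objectivec
// Sketch: assumes an NSTimeInterval ivar `lastImageUpdateTime`, initially 0.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    NSTimeInterval now = CFAbsoluteTimeGetCurrent();
    if (now - lastImageUpdateTime < 1.0) {
        return; // less than a second since the last update: ignore this buffer
    }
    lastImageUpdateTime = now;

    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    dispatch_async(dispatch_get_main_queue(), ^{
        [imageView setImage:image];
    });
}
```

This way the capture connection keeps running at its normal frame rate, so the preview layer is unaffected, and only the UIImageView is throttled to one update per second.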