Capturing the camera buffer in Mac / Cocoa

Date: 2011-12-06 19:16:33

Tags: objective-c macos cocoa qt multimedia

In my application I need to capture the image buffer from the camera and pass it to the other end over the network.

I used the following code:

-(void)startVideoSessionInSubThread{
    // Create the capture session

    pPool = [[NSAutoreleasePool alloc] init]; // note: this pool is never drained; [pPool drain] should be called when the thread finishes

    mCaptureSession = [[QTCaptureSession alloc] init];

    // Connect inputs and outputs to the session    
    BOOL success = NO;
    NSError *error = nil;

    // Find a video device  

    QTCaptureDevice *videoDevice = [QTCaptureDevice defaultInputDeviceWithMediaType:QTMediaTypeVideo];
    success = [videoDevice open:&error];


    // If a video input device can't be found or opened, try to find and open a muxed input device

    if (!success) {
        videoDevice = [QTCaptureDevice defaultInputDeviceWithMediaType:QTMediaTypeMuxed];
        success = [videoDevice open:&error];

    }

    if (!success) {
        videoDevice = nil;
        // Handle error


    }

    if (videoDevice) {
        //Add the video device to the session as a device input

        mCaptureVideoDeviceInput = [[QTCaptureDeviceInput alloc] initWithDevice:videoDevice];
        success = [mCaptureSession addInput:mCaptureVideoDeviceInput error:&error];
        if (!success) {
            // Handle error
        }


        mCaptureDecompressedVideoOutput = [[QTCaptureDecompressedVideoOutput alloc] init];

        [mCaptureDecompressedVideoOutput setPixelBufferAttributes:[NSDictionary dictionaryWithObjectsAndKeys:
                                                                   [NSNumber numberWithDouble:320.0], (id)kCVPixelBufferWidthKey,
                                                                   [NSNumber numberWithDouble:240.0], (id)kCVPixelBufferHeightKey,
                                                                   [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey,
                                                                   //   kCVPixelFormatType_32BGRA , (id)kCVPixelBufferPixelFormatTypeKey,      
                                                                   nil]];

        [mCaptureDecompressedVideoOutput setDelegate:self];

        [mCaptureDecompressedVideoOutput setMinimumVideoFrameInterval:0.0333333333333]; // 1/30 s between frames, i.e. 30 fps

        success = [mCaptureSession addOutput:mCaptureDecompressedVideoOutput error:&error];

        if (!success) {
            [[NSAlert alertWithError:error] runModal];
            return;
        }

        [mCaptureView setCaptureSession:mCaptureSession];
        bVideoStart = NO;
        [mCaptureSession startRunning];
        bVideoStart = YES; // was set to NO twice in the original; presumably meant to flag that capture has started

    }

}
-(void)startVideoSession{
    // start video from different session 
    [NSThread detachNewThreadSelector:@selector(startVideoSessionInSubThread) toTarget:self withObject:nil];
}
and in the callback function:

// Do something with the buffer 
- (void)captureOutput:(QTCaptureOutput *)captureOutput didOutputVideoFrame:(CVImageBufferRef)videoFrame 
     withSampleBuffer:(QTSampleBuffer *)sampleBuffer 
       fromConnection:(QTCaptureConnection *)connection


    [self processImageBufferNew:videoFrame];

    return;
}

In the function processImageBufferNew, I add the image to a queue. It is a synchronized queue, and a separate thread reads from it and processes the buffers.
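The synchronized queue described above can be sketched roughly like this (a pthread-based linked-list queue in plain C; all names are hypothetical, not taken from the question's code). Note that it is unbounded — the producer can always enqueue — which is exactly why the backlog grows whenever the network thread falls behind:

```c
#include <pthread.h>
#include <stdlib.h>

// Hypothetical frame node; in the real app the payload would be the
// bytes copied out of the CVImageBufferRef.
typedef struct FrameNode {
    void *data;
    struct FrameNode *next;
} FrameNode;

typedef struct {
    FrameNode *head, *tail;
    size_t count;
    pthread_mutex_t lock;
    pthread_cond_t nonEmpty;
} FrameQueue;

void queue_init(FrameQueue *q) {
    q->head = q->tail = NULL;
    q->count = 0;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->nonEmpty, NULL);
}

// Producer side (the capture callback): always succeeds, so the queue
// can grow without limit.
void queue_push(FrameQueue *q, void *data) {
    FrameNode *n = malloc(sizeof *n);
    n->data = data;
    n->next = NULL;
    pthread_mutex_lock(&q->lock);
    if (q->tail) q->tail->next = n; else q->head = n;
    q->tail = n;
    q->count++;
    pthread_cond_signal(&q->nonEmpty);
    pthread_mutex_unlock(&q->lock);
}

// Consumer side (the network thread): blocks until a frame arrives,
// then removes and returns the oldest one.
void *queue_pop(FrameQueue *q) {
    pthread_mutex_lock(&q->lock);
    while (!q->head)
        pthread_cond_wait(&q->nonEmpty, &q->lock);
    FrameNode *n = q->head;
    q->head = n->next;
    if (!q->head) q->tail = NULL;
    q->count--;
    pthread_mutex_unlock(&q->lock);
    void *data = n->data;
    free(n);
    return data;
}
```

With this design, if the producer runs at 30 fps and the consumer sends more slowly, `count` only ever trends upward.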

What happens is that the capture callback fires very frequently (I can see this in the logs), while sending the frames is very slow, so the queue size grows very quickly.

Any suggestions on the design?

I run the network thread separately; it takes the oldest node from the queue so the frames can be sent in order. According to the logs, more than 500 nodes are added within a minute, and this causes memory growth and CPU starvation.
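Those numbers follow directly from the pixel-buffer attributes in the code: a 320x240 frame in kCVPixelFormatType_32BGRA is 4 bytes per pixel, i.e. 320 × 240 × 4 = 307,200 bytes (~300 KB), so a backlog of 500 nodes holds roughly 146 MB of raw pixel data. A minimal sketch of that arithmetic (the helper names are mine, not from the code above):

```c
// Size in bytes of one uncompressed frame.
long frame_bytes(long width, long height, long bytes_per_pixel) {
    return width * height * bytes_per_pixel;
}

// Total bytes held by a backlog of queued 320x240 32BGRA frames.
long backlog_bytes(long frames) {
    return frames * frame_bytes(320, 240, 4); // 307200 bytes per frame
}
```

`backlog_bytes(500)` is 153,600,000 bytes, about 146 MB — consistent with the memory growth described when the sender cannot keep up with 30 fps.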

Should I use some other logic to capture the camera frames?

1 answer:

Answer 0 (score: 1):

If you cannot send frames over the network as fast as they arrive in QTCaptureDecompressedVideoOutput's captureOutput:didOutputVideoFrame:withSampleBuffer:fromConnection: delegate method, then you will have to start dropping frames at some point (when you run out of memory, when you run out of slots in a fixed-size array of nodes to send, and so on).

I would suggest choosing a network packet transmission scheme in which dropped frames are not so noticeable or abrupt. Faster network throughput means fewer dropped frames; a slower network means more frames have to be dropped.
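One way to implement the frame dropping the answer recommends is to replace the unbounded queue with a small fixed-capacity ring buffer that evicts the oldest frame when full, so the sender always works on recent frames and memory stays bounded. This is a language-agnostic sketch in plain C (all names are hypothetical; locking between the capture callback and the network thread is omitted for brevity and would be required in the real app):

```c
#include <stddef.h>

// Fixed-capacity ring buffer: when full, the oldest frame is evicted,
// so memory use is bounded and the sender stays close to "live".
#define RING_CAPACITY 8

typedef struct {
    void  *slots[RING_CAPACITY];
    size_t head;    // next slot to read
    size_t count;   // frames currently stored
    size_t dropped; // frames evicted because the consumer was too slow
} FrameRing;

// Producer side (capture callback). Returns the evicted frame so the
// caller can release its buffer, or NULL if nothing was dropped.
void *ring_push(FrameRing *r, void *frame) {
    void *evicted = NULL;
    if (r->count == RING_CAPACITY) {              // full: drop the oldest
        evicted = r->slots[r->head];
        r->head = (r->head + 1) % RING_CAPACITY;
        r->count--;
        r->dropped++;
    }
    r->slots[(r->head + r->count) % RING_CAPACITY] = frame;
    r->count++;
    return evicted;
}

// Consumer side (network thread). Returns the oldest stored frame,
// or NULL when the ring is empty.
void *ring_pop(FrameRing *r) {
    if (r->count == 0) return NULL;
    void *frame = r->slots[r->head];
    r->head = (r->head + 1) % RING_CAPACITY;
    r->count--;
    return frame;
}
```

The `dropped` counter also gives a cheap way to log how far the network is falling behind; a small capacity keeps latency low, while a larger one smooths short network stalls at the cost of staler frames.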