Passing video frames to Core Image on OS X

Date: 2015-02-28 07:35:38

Tags: macos avfoundation core-image

Hi all! I've been piecing this together over the last few weeks from various helpful resources (including a lot of posts from Stack Overflow), trying to build something that takes the webcam feed and detects smiles as they happen (and might as well draw boxes around the faces and the smiles too once they've been detected, since that part shouldn't be hard once I have them). Please cut me some slack if the code is messy, as I'm still learning.

At the moment I'm stuck trying to pass each frame to a CIImage so it can be analysed for faces (I plan to deal with smiles once I've cleared the face hurdle). As it stands, the project builds fine if I comment out the block after (5) - it brings up a simple AVCaptureVideoPreviewLayer in a window. I think that's what I'm calling the "rootLayer", i.e. the first layer of the displayed output, and once I detect faces in the video frames I'll draw a rectangle following the "bounds" of each detected face in a new layer overlaid on top of it, which I'm calling "previewLayer"... is that right?

But with the block after (5) left in, the compiler throws these three errors -

Undefined symbols for architecture x86_64:
  "_CMCopyDictionaryOfAttachments", referenced from:
      -[AVRecorderDocument captureOutput:didOutputSampleBuffer:fromConnection:] in AVRecorderDocument.o
  "_CMSampleBufferGetImageBuffer", referenced from:
      -[AVRecorderDocument captureOutput:didOutputSampleBuffer:fromConnection:] in AVRecorderDocument.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)

Can anyone tell me where I'm going wrong, and what the next steps should be?

Thanks for any help; I've been stuck at this point for a few days and can't figure it out, and all the examples I can find are for iOS and don't work on OS X.

- (id)init
{
    self = [super init];
    if (self) {

        // Create a capture session
        session = [[AVCaptureSession alloc] init];

        // Set a session preset (resolution)
        self.session.sessionPreset = AVCaptureSessionPreset640x480;

        // Move the output part to another function
        // (called after the session exists; otherwise [self.session canAddOutput:] is a message to nil and the output never gets added)
        [self addVideoDataOutput];

        // Select devices if any exist
        AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        if (videoDevice) {
            [self setSelectedVideoDevice:videoDevice];
        } else {
            [self setSelectedVideoDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeMuxed]];
        }
        NSError *error = nil;
        //  Add an input
        videoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
        [self.session addInput:self.videoDeviceInput];

        // Start the session (app opens slower if it is here but I think it is needed in order to send the frames for processing)
        [[self session] startRunning];


          // Initial refresh of device list
         [self refreshDevices];

    }
    return self;
}
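For context, these are roughly the declarations my header has (I'm paraphrasing from memory rather than pasting the real file, so the property attributes may not be exact):

@interface AVRecorderDocument : NSDocument <AVCaptureVideoDataOutputSampleBufferDelegate>
@property (strong) AVCaptureSession *session;
@property (strong) AVCaptureDeviceInput *videoDeviceInput;
@property (strong) AVCaptureVideoPreviewLayer *previewLayer;
@property (strong) CIImage *ciFrameImage;
@property (weak) IBOutlet NSView *previewView;   // layer-backed view from the nib
- (void)refreshDevices;
@end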

-(void) addVideoDataOutput {
    // (1) Instantiate a new video data output object
    AVCaptureVideoDataOutput * captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    captureOutput.videoSettings = @{ (NSString *) kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };

    // discard if the data output queue is blocked (while CI processes the still image)
    captureOutput.alwaysDiscardsLateVideoFrames = YES;

    // (2) The sample buffer delegate requires a serial dispatch queue
    dispatch_queue_t captureOutputQueue;
    captureOutputQueue = dispatch_queue_create("CaptureOutputQueue", DISPATCH_QUEUE_SERIAL);
    [captureOutput setSampleBufferDelegate:self queue:captureOutputQueue];
    dispatch_release(captureOutputQueue);  // -setSampleBufferDelegate:queue: retains the queue, so releasing our own reference here is fine under manual retain/release.
                                           // Under ARC with a 10.8+ deployment target, dispatch objects are ARC-managed and this call should be removed.

    // (3) Define the pixel format for the video data output
    //     (note: this repeats the videoSettings assignment from (1), so one of the two could be removed)
    NSString * key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber * value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary * settings = @{key: value};
    [captureOutput setVideoSettings:settings];

    // (4) Configure the output port on the captureSession property
    if ([self.session canAddOutput:captureOutput]) {
        [session addOutput:captureOutput];
    }

}
// Implement the Sample Buffer Delegate Method
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

// I *think* I have a video frame now in some sort of image format... so have to convert it into a CIImage before I can process it:

    // (5) Convert CMSampleBufferRef to CVImageBufferRef, then to a CI Image (per weichsel's answer in July '13)
    CVImageBufferRef cvFrameImage = CMSampleBufferGetImageBuffer(sampleBuffer);  // Having trouble here, prog. stops and won't recognise CMSampleBufferGetImageBuffer.
    CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate);
    self.ciFrameImage = [[CIImage alloc] initWithCVImageBuffer:cvFrameImage options:(__bridge NSDictionary *)attachments];
    if (attachments) {
        CFRelease(attachments);  // CMCopyDictionaryOfAttachments follows the Create rule, so the dictionary has to be released here
    }
    //self.ciFrameImage = [[CIImage alloc] initWithCVImageBuffer:cvFrameImage];

    //OK so it is a CIImage. Find some way to send it to a separate CIImage function to find the faces, then smiles.  Then send it somewhere else to be displayed on top of AVCaptureVideoPreviewLayer
    //TBW

}
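Once the frame is a CIImage, I assume the next step is something like the following with CIDetector. This is only a sketch of where I'm heading, not working code from my project: detectFacesInCIImage: is a name I made up, and the CIDetectorSmile / hasSmile parts need OS X 10.9 or later.

- (void)detectFacesInCIImage:(CIImage *)image
{
    // In practice the detector should be created once and reused; shown inline here for brevity.
    NSDictionary *detectorOptions = @{ CIDetectorAccuracy : CIDetectorAccuracyLow };
    CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                                  context:nil
                                                  options:detectorOptions];

    // Ask Core Image to classify smiles per detected face.
    NSArray *features = [faceDetector featuresInImage:image
                                              options:@{ CIDetectorSmile : @YES }];

    for (CIFaceFeature *face in features) {
        NSLog(@"Face at %@, smiling: %d", NSStringFromRect(NSRectFromCGRect(face.bounds)), face.hasSmile);

        // Any layer/UI updates have to hop back to the main queue,
        // because the sample buffer delegate runs on the serial capture queue.
        dispatch_async(dispatch_get_main_queue(), ^{
            // draw/update an overlay rectangle for face.bounds here
        });
    }
}

I'd presumably call it from the end of captureOutput:didOutputSampleBuffer:fromConnection: with [self detectFacesInCIImage:self.ciFrameImage];.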


- (NSString *)windowNibName
{
    return @"AVRecorderDocument";
}


- (void)windowControllerDidLoadNib:(NSWindowController *) aController
{
    [super windowControllerDidLoadNib:aController];

    // Attach preview to session
    CALayer *rootLayer = self.previewView.layer;
    [rootLayer setMasksToBounds:YES]; //aaron added
    self.previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
    [self.previewLayer setBackgroundColor:CGColorGetConstantColor(kCGColorBlack)];
    [self.previewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];
    [self.previewLayer setFrame:[rootLayer bounds]];
    //[self.previewLayer setAutoresizingMask:kCALayerWidthSizable | kCALayerHeightSizable];  // these CALayer autoresizing constants do exist on OS X; enabling this keeps the layer sized with the window
    [rootLayer addSublayer:previewLayer];
//  [newPreviewLayer release];  // leftover from a manual retain/release (non-ARC) example; not needed under ARC, and "newPreviewLayer" doesn't exist in this version of the code


}
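For reference, what I mean by the overlay idea is roughly the following, added at the end of windowControllerDidLoadNib:. "faceOverlayLayer" is just an illustrative name, it isn't declared anywhere above yet:

// Sketch only: a separate layer on top of the preview layer that will later hold
// one rectangle per detected face.
CALayer *faceOverlayLayer = [CALayer layer];
[faceOverlayLayer setFrame:[rootLayer bounds]];
[rootLayer addSublayer:faceOverlayLayer];   // added after previewLayer, so it draws on top of it
// later: add one sublayer (or CAShapeLayer rectangle) per detected face to faceOverlayLayer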

1 Answer:

Answer 0: (score: 1)

(Moved from the comments section)

Wow. I guess it took two days and a Stack Overflow post to figure out that I hadn't added CoreMedia.framework to my project.
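For anyone else hitting the same "Undefined symbols" error: CMSampleBufferGetImageBuffer and CMCopyDictionaryOfAttachments are CoreMedia functions, so the fix is to add CoreMedia.framework under Target → Build Phases → Link Binary With Libraries, and make sure the header is imported. Something along these lines at the top of AVRecorderDocument.m is enough (the AVFoundation import is shown on the assumption it's already there):

#import <AVFoundation/AVFoundation.h>   // AVCaptureSession, AVCaptureVideoDataOutput, etc.
#import <CoreMedia/CoreMedia.h>         // declares CMSampleBufferGetImageBuffer and CMCopyDictionaryOfAttachments

Importing the header alone isn't enough; the framework also has to be linked, which is what makes the linker errors go away.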