Taking ownership of memory from a CVImageBufferRef

Asked: 2012-07-06 17:08:20

Tags: objective-c ios memory-management avcapturesession

I'm building a simple pipeline that grabs images from an AVCaptureSession, processes them in OpenCV, and then renders them in OpenGL. It is based on RosyWriter, but without the audio and recording functionality. The OpenCV processing looks like this:

- (void)processPixelBuffer: (CVImageBufferRef)pixelBuffer 
{
    CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
    int bufferWidth = (int)CVPixelBufferGetWidth(pixelBuffer);
    int bufferHeight = (int)CVPixelBufferGetHeight(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    unsigned char *pixel = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);

    // Wrap the buffer's memory without copying: rows = height, cols = width,
    // and pass the buffer's stride in case rows are padded beyond width * 4.
    cv::Mat image = cv::Mat(bufferHeight, bufferWidth, CV_8UC4, pixel, bytesPerRow);
    // do any processing
    [self setDisplay_matrix:image];
    CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
}

So far I haven't copied any memory anywhere in this function, and I'd like to keep it that way. The problem is that pixelBuffer may still own the memory referenced by display_matrix. The processing code may or may not allocate new memory and store it in image. If the processing does not allocate new memory, I have to pass the pixelBuffer along with display_matrix to keep the data from being freed. Is there a way for me to take ownership of the memory? I want to destroy the pixelBuffer without destroying the memory it points to.
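To make the problem concrete, here is a rough sketch of the bookkeeping I think I would otherwise need at the end of processPixelBuffer: (the _display_matrix_buffer ivar is hypothetical and not part of the code above):

    // Hypothetical bookkeeping: keep the pixel buffer alive only while the
    // processed matrix still aliases its memory.
    if (image.data == pixel) {
        // Processing worked in place: `image` still points into the pixel buffer.
        CVPixelBufferRetain(pixelBuffer);
        if (_display_matrix_buffer != NULL)
            CVPixelBufferRelease(_display_matrix_buffer); // drop the previous frame
        _display_matrix_buffer = pixelBuffer;
    } else {
        // Processing allocated fresh memory; cv::Mat's reference counting owns it,
        // so the pixel buffer is no longer needed.
        if (_display_matrix_buffer != NULL)
            CVPixelBufferRelease(_display_matrix_buffer);
        _display_matrix_buffer = NULL;
    }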

On a related note, what exactly does LockBaseAddress do? If I pass a cv::Mat around, do I need to lock the base address of the CVImageBufferRef every time I want to modify/use the data through the cv::Mat?
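For reference, the pattern I am currently assuming (my guess, not something I have confirmed) is to bracket every access through the wrapping cv::Mat with a lock/unlock pair:

    // Assumed pattern: keep the base address locked for as long as any cv::Mat
    // aliases the buffer, and pair every lock with exactly one unlock.
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    cv::Mat view((int)CVPixelBufferGetHeight(pixelBuffer),
                 (int)CVPixelBufferGetWidth(pixelBuffer),
                 CV_8UC4,
                 CVPixelBufferGetBaseAddress(pixelBuffer),
                 CVPixelBufferGetBytesPerRow(pixelBuffer));
    // ... touch the pixels through `view` only between lock and unlock ...
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);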

1 Answer:

Answer 0 (score: 0):

You can create a data provider from the base-address data without copying, and then create a UIImage from that data provider. To keep the buffer from being reused while this image is referenced, you need to retain the sample buffer and keep its base address locked. Both should be unlocked and released automatically once you let go of the image object:

- (void)captureOutput:(AVCaptureOutput *)captureOutput 
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Retain the sample buffer and lock the base address; both are undone in
    // the data provider's release callback below.
    CFRetain(sampleBuffer);
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    // BGRA output is non-planar, so use the plain base address accessor.
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    UIImage *image = imageFromData(baseAddress, width, height, bytesPerRow, sampleBuffer);

    // Now you can store this UIImage as long as you want
}

I took imageFromData from this project https://github.com/k06a/UIImage-DecompressAndMap/blob/master/UIImage%2BDecompressAndMap.m and adapted it a bit:

// Release callback defined further below; declared here so it can be referenced.
void unlock_function(void *info, const void *data, size_t size);

UIImage *imageFromData(void *data, size_t width, size_t height, size_t bytesPerRow, CMSampleBufferRef sampleBuffer)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Wrap the pixel data without copying; the sample buffer is passed as the
    // provider's info pointer so the release callback can unlock and release it.
    CGDataProviderRef provider = CGDataProviderCreateWithData((void *)sampleBuffer, data, bytesPerRow * height, unlock_function);
    CGImageRef inflatedImage = CGImageCreate(width, height, 8, 4*8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst, provider, NULL, NO, kCGRenderingIntentDefault);

    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);

    UIImage *img = [UIImage imageWithCGImage:inflatedImage scale:1.0 orientation:UIImageOrientationUp];
    CGImageRelease(inflatedImage);
    return img;
}

You also need to provide the unlock_function:

// Called by Core Graphics when the data provider is no longer needed,
// i.e. once nothing references the image anymore.
void unlock_function(void *info, const void *data, size_t size)
{
    // Unlock the base address and release the sample buffer
    CMSampleBufferRef sampleBuffer = (CMSampleBufferRef)info;
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CFRelease(sampleBuffer);
}
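As a rough usage sketch (the frameImageView outlet and the main-queue dispatch are my own assumptions, not part of your project): you can assign the image wherever you need it, and once nothing references it anymore the data provider calls unlock_function for you:

    // Hypothetical continuation of captureOutput:didOutputSampleBuffer:fromConnection:,
    // assuming a `frameImageView` outlet; UI work must happen on the main queue.
    dispatch_async(dispatch_get_main_queue(), ^{
        self.frameImageView.image = image;
        // When this assignment is later overwritten (or the view goes away),
        // the last reference to `image` disappears, Core Graphics releases the
        // data provider, and unlock_function unlocks and releases the buffer.
    });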