iOS CoreVideo memory leak

Time: 2011-04-01 16:46:45

Tags: ios memory-leaks core-video

Can someone help me track down these CoreVideo memory leaks that show up when I run Instruments from Xcode?

Basically, the leaks occur when I press the "Record Video" button on my custom motion-JPEG player. Since the Leaks instrument does not point to any of my own calls, I can't tell exactly which part of my code is leaking. By the way, I'm testing for the leaks on an iPad device.

Messages from the Leaks instrument:

  • Responsible Library: CoreVideo
  • Responsible Frames:
      CVPixelBufferBacking::initWithPixelBufferDescription(...)
      CVObjectAlloc(...)
      CVBuffer::init()

Here is the code that processes each motion-JPEG frame streamed from the server:

- (void)processServerData:(NSData *)data {

    // Render the frame in the UIImage control.
    UIImage *image = [UIImage imageWithData:data];
    self.imageCtrl.image = image;

    // Check whether we are recording.
    if (myRecorder.isRecording) {

        // Create the initial sample. TODO: check if this is still needed.
        if (counter == 0) {

            self.buffer = [Recorder pixelBufferFromCGImage:image.CGImage size:myRecorder.imageSize];
            CVPixelBufferPoolCreatePixelBuffer(NULL, myRecorder.adaptor.pixelBufferPool, &buffer);

            if (buffer)
            {
                CVBufferRelease(buffer);
            }
        }

        if (counter < myRecorder.maxFrames)
        {
            if ([myRecorder.writerInput isReadyForMoreMediaData])
            {
                CMTime frameTime = CMTimeMake(1, myRecorder.timeScale);
                CMTime lastTime = CMTimeMake(counter, myRecorder.timeScale);
                CMTime presentTime = CMTimeAdd(lastTime, frameTime);

                self.buffer = [Recorder pixelBufferFromCGImage:image.CGImage size:myRecorder.imageSize];

                [myRecorder.adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];

                if (buffer)
                {
                    CVBufferRelease(buffer);
                }

                counter++;

                if (counter == myRecorder.maxFrames)
                {
                    [myRecorder finishSession];

                    counter = 0;
                    myRecorder.isRecording = NO;
                }
            }
            else
            {
                NSLog(@"adaptor not ready counter=%d ", counter);
            }
        }
    }
}

Here is the pixelBufferFromCGImage method:

+ (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image size:(CGSize)size {
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;

    // Create an empty pixel buffer; the caller owns the returned reference.
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width,
                                          size.height, kCVPixelFormatType_32ARGB, (CFDictionaryRef)options,
                                          &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    // Draw the CGImage into the pixel buffer's backing memory.
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width,
                                                 size.height, 8, 4 * size.width, rgbColorSpace,
                                                 kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);
    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                           CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}
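
Since pixelBufferFromCGImage:size: is built on CVPixelBufferCreate, the buffer it returns follows the Core Foundation Create rule: the caller owns a +1 reference and must balance it with CVBufferRelease once the buffer has been handed off. A minimal sketch of that contract (frameImage, frameIndex, recorder, and presentTime are placeholder names, not identifiers from this post):

    // The returned buffer arrives with a +1 retain count (Create rule).
    CMTime presentTime = CMTimeMake(frameIndex, recorder.timeScale);
    CVPixelBufferRef pb = [Recorder pixelBufferFromCGImage:frameImage.CGImage
                                                      size:recorder.imageSize];
    if (pb)
    {
        // The adaptor retains whatever it still needs internally, so the
        // local +1 can be released right after the append.
        [recorder.adaptor appendPixelBuffer:pb withPresentationTime:presentTime];
        CVBufferRelease(pb);
    }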

Any help is greatly appreciated! Thanks!

2 Answers:

Answer 0 (score: 2):

I refactored the processFrame method and I no longer get the leaks. The throw-away initial sample is gone, and each buffer is now created, checked, appended, and released along a single path:

-(void) processFrame:(UIImage *) image {

    if (myRecorder.frameCounter < myRecorder.maxFrames)
    {
        if([myRecorder.writerInput isReadyForMoreMediaData])
        {
            CMTime frameTime = CMTimeMake(1, myRecorder.timeScale);
            CMTime lastTime=CMTimeMake(myRecorder.frameCounter, myRecorder.timeScale); 
            CMTime presentTime=CMTimeAdd(lastTime, frameTime);

            buffer = [Recorder pixelBufferFromCGImage:image.CGImage size:myRecorder.imageSize];

            if(buffer)
            {
                [myRecorder.adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];

                myRecorder.frameCounter++;

                CVBufferRelease(buffer);

                if (myRecorder.frameCounter==myRecorder.maxFrames)
                {
                    [myRecorder finishSession];

                    myRecorder.frameCounter=0;
                    myRecorder.isRecording = NO;
                }
            }
            else
            {
                NSLog(@"Buffer is empty");
            }
        }
        else
        {
            NSLog(@"adaptor not ready frameCounter=%d ",myRecorder.frameCounter );
        }
    }

}

Answer 1 (score: 0):

I don't see anything too obvious. I did notice that you are using both self.buffer and buffer here. If the property is retaining, you could be leaking there: the first line may leak once CVPixelBufferPoolCreatePixelBuffer allocates a new buffer into the ivar on the second line, because the pixel buffer retained by self.buffer on the first line is overwritten without ever being released.

    self.buffer = [Recorder pixelBufferFromCGImage:image.CGImage size:myRecorder.imageSize];
    CVPixelBufferPoolCreatePixelBuffer (NULL, myRecorder.adaptor.pixelBufferPool, &buffer);
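
One way to keep a single, balanced ownership chain for the initial sample is to create it from the pool alone, so no buffer is orphaned by a second assignment. A sketch, assuming buffer is declared as a retain property:

    // Create the initial sample from the pool only; nothing is overwritten.
    CVPixelBufferRef poolBuffer = NULL;
    CVReturn result = CVPixelBufferPoolCreatePixelBuffer(NULL,
                                                         myRecorder.adaptor.pixelBufferPool,
                                                         &poolBuffer);
    if (result == kCVReturnSuccess && poolBuffer)
    {
        self.buffer = poolBuffer;    // the retain setter takes its own reference
        CVBufferRelease(poolBuffer); // balance the pool's Create (+1)
    }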

Hope that helps.