appendSampleBuffer with the audio AVAssetWriterInput "leaks" memory until endSessionAtSourceTime

Date: 2013-02-13 06:07:59

Tags: ios objective-c memory-leaks audio-recording avassetwriter

I have a strange memory "leak" with AVAssetWriterInput's appendSampleBuffer. I'm writing video and audio at the same time, so I have one AVAssetWriter with two inputs, one for video and one for audio:

self.videoWriter = [[[AVAssetWriter alloc] initWithURL:[self.currentVideo currentVideoClipLocalURL]
                                              fileType:AVFileTypeMPEG4
                                                 error:&error] autorelease];
...
self.videoWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                           outputSettings:videoSettings];
self.videoWriterInput.expectsMediaDataInRealTime = YES;
[self.videoWriter addInput:self.videoWriterInput];
...
self.audioWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
                                                           outputSettings:audioSettings];
self.audioWriterInput.expectsMediaDataInRealTime = YES;
[self.videoWriter addInput:self.audioWriterInput];
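
(Writing is then started in the usual way before any buffers are appended; a minimal sketch, where startTime is assumed to come from the first captured sample buffer:)

if ([self.videoWriter startWriting])
{
    [self.videoWriter startSessionAtSourceTime:startTime];
}
else
{
    NSLog(@"Couldn't start writing: %@", self.videoWriter.error);
}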

I start writing and, on the surface, everything works fine. The video and audio get written, they're aligned, and so on. However, I ran the code through the Allocations instrument and noticed the following:

(Image: CoreMedia allocations graph)

The audio bytes are being retained in memory, as I'll demonstrate in a second; that's the growth in the graph. The audio bytes are only released after I call [self.videoWriter endSessionAtSourceTime:...], which shows up as the dramatic drop in memory usage. Here is my audio writing code, which is dispatched as a block onto a serial queue:

@autoreleasepool
{
    // The objects that will hold the audio data
    CMSampleBufferRef sampleBuffer;
    CMBlockBufferRef  blockBuffer1;
    CMBlockBufferRef  blockBuffer2;

    size_t nbytes = numSamples * asbd_.mBytesPerPacket;

    OSStatus status = noErr;
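
    // Note: with kCFAllocatorNull as the block allocator, blockBuffer1 does NOT
    // copy or own `data`; blockBuffer2 below makes its own copy of the bytes
    // (kCMBlockBufferAlwaysCopyDataFlag), which is why two block buffers exist.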
    status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
                                                data,
                                                nbytes,
                                                kCFAllocatorNull,
                                                NULL,
                                                0,
                                                nbytes,
                                                kCMBlockBufferAssureMemoryNowFlag,
                                                &blockBuffer1);

    if (status != noErr)
    {
        NLog(@"CMBlockBufferCreateWithMemoryBlock error at buffer 1");
        return;
    }

    status = CMBlockBufferCreateContiguous(kCFAllocatorDefault,
                                           blockBuffer1,
                                           kCFAllocatorDefault,
                                           NULL,
                                           0,
                                           nbytes,
                                           kCMBlockBufferAssureMemoryNowFlag | kCMBlockBufferAlwaysCopyDataFlag,
                                           &blockBuffer2);

    if (status != noErr)
    {
        NSLog(@"CMBlockBufferCreateWithMemoryBlock error at buffer 2");
        CFRelease(blockBuffer1);
        return;
    }

    // Finally, create the CMSampleBufferRef
    status = CMAudioSampleBufferCreateWithPacketDescriptions(kCFAllocatorDefault,
                                                             blockBuffer2,
                                                             YES,   // Yes data is ready
                                                             NULL,  // No callback needed to make data ready
                                                             NULL,
                                                             audioFormatDescription_,
                                                             1,
                                                             timestamp,
                                                             NULL,
                                                             &sampleBuffer);


    if (status != noErr)
    {
        NSLog(@"CMAudioSampleBufferCreateWithPacketDescriptions error.");
        CFRelease(blockBuffer1);
        CFRelease(blockBuffer2);
        return;
    }

    if ([self.audioWriterInput isReadyForMoreMediaData])
    {
        if (![self.audioWriterInput appendSampleBuffer:sampleBuffer])
        {
            NSLog(@"Couldn't append audio sample buffer: %d", numAudioCallbacks_);
        }
    } else {
        NSLog(@"AudioWriterInput isn't ready for more data.");
    }

    // One release per create
    CFRelease(blockBuffer1);
    CFRelease(blockBuffer2);
    CFRelease(sampleBuffer);
}
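
(For context, this whole block is submitted to the serial queue roughly as follows; audioWriteQueue_ is a placeholder name for it:)

// Hypothetical: a serial queue created once, e.g. in -init:
// audioWriteQueue_ = dispatch_queue_create("audioWrite", DISPATCH_QUEUE_SERIAL);
dispatch_async(audioWriteQueue_, ^{
    @autoreleasepool {
        // ... the buffer-creation and append code shown above ...
    }
});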

As you can see, I release each buffer once per create. I've traced the "leak" down to the line that appends the audio buffers:

[self.audioWriterInput appendSampleBuffer:sampleBuffer]

I proved this by commenting out that line, after which I got the following "leak-free" allocations graph (although the recorded video now had no audio, of course):

(Image: leak-free allocations graph)

I tried one other thing: adding the appendSampleBuffer line back and instead double-releasing blockBuffer2:

CFRelease(blockBuffer1);
CFRelease(blockBuffer2);
CFRelease(blockBuffer2); // Double release to test the hypothesis that appendSamplebuffer is retaining this
CFRelease(sampleBuffer);

Doing so did not cause an immediate double-free, indicating that blockBuffer2 has a retain count of 2 at that point. This produced the same "leak-free" allocations graph, except that when I called [self.videoWriter endSessionAtSourceTime:...] I got a crash from the over-release (indicating that self.videoWriter was trying to release all of the blockBuffer2 pointers that had been passed in).
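
A quick way to sanity-check that retain count directly is a debug-only probe right after the append (CFGetRetainCount proves nothing for program logic, but it is a handy diagnostic):

// Debug only: a retain count greater than our own outstanding references
// suggests appendSampleBuffer: (or something downstream) is holding on.
NSLog(@"blockBuffer2 retain count: %ld", CFGetRetainCount(blockBuffer2));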

If, instead, I try the following:

CFRelease(blockBuffer1);
CFRelease(blockBuffer2);
CMSampleBufferInvalidate(sampleBuffer); // Invalidate sample buffer
CFRelease(sampleBuffer);

then [self.audioWriterInput appendSampleBuffer:sampleBuffer] and the calls to append video frames fail on every call after that.

So my conclusion is that AVAssetWriter or AVAssetWriterInput retains blockBuffer2 until the video has finished recording. Obviously, this can cause real memory problems if the recording runs long enough. Am I doing something wrong?

Edit: The audio bytes I'm getting are in PCM format, while the video format I'm writing is MPEG-4 and the audio format for that video is MPEG4AAC. Is it possible that the video writer is performing the PCM → AAC conversion on the fly, and that's why the PCM is being buffered?
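
(For reference, audio output settings along these lines are what would ask the writer to encode AAC from PCM input; the exact values here are illustrative, not my original ones:)

AudioChannelLayout channelLayout;
memset(&channelLayout, 0, sizeof(channelLayout));
channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Mono;

NSDictionary *audioSettings = @{
    AVFormatIDKey:         @(kAudioFormatMPEG4AAC),  // writer must transcode PCM -> AAC
    AVSampleRateKey:       @44100.0,
    AVNumberOfChannelsKey: @1,
    AVChannelLayoutKey:    [NSData dataWithBytes:&channelLayout
                                          length:sizeof(channelLayout)],
    AVEncoderBitRateKey:   @64000
};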

1 Answer:

Answer 0 (score: 1)

Since you've been waiting a month for an answer, I'll give you a less-than-ideal but workable one.

You could use the ExtendedAudioFile functions to write a separate audio file, and then play the video and audio back together using an AVComposition. I think you could use AVFoundation to composite the caf and the video together without re-encoding, if you need them combined at the end of recording.
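
A minimal sketch of the ExtAudioFile side, assuming the same asbd_ PCM description from your question (numFrames and bufferList would come from your audio callback; error handling is abbreviated):

#import <AudioToolbox/AudioToolbox.h>

// Open a .caf whose file format matches the incoming PCM, so no encoding
// happens on the write path.
static ExtAudioFileRef OpenPCMFile(NSURL *url, const AudioStreamBasicDescription *asbd)
{
    ExtAudioFileRef file = NULL;
    OSStatus status = ExtAudioFileCreateWithURL((CFURLRef)url,
                                                kAudioFileCAFType,
                                                asbd,   // file format == client PCM format
                                                NULL,   // default channel layout
                                                kAudioFileFlags_EraseFile,
                                                &file);
    if (status != noErr) return NULL;

    // Prime the async writer once so later calls are safe from the audio callback.
    ExtAudioFileWriteAsync(file, 0, NULL);
    return file;
}

// In each audio callback:  ExtAudioFileWriteAsync(file, numFrames, &bufferList);
// When recording finishes: ExtAudioFileDispose(file);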

That will get you up and running, and then you can sort out the memory accumulation at your leisure.