Converting an audio sample buffer to data and back to an audio sample buffer

Date: 2019-03-04 09:44:56

Tags: ios objective-c audio video

I am doing real-time audio noise suppression while recording video. I get the audio sample buffer from the AVCapture delegate callback -captureOutput:didOutputSampleBuffer:fromConnection:. The noise-suppression pipeline is: sample buffer -> char * -> noise suppression -> char * -> sample buffer. I am stuck at the last step, where I get this error:

CMSampleBufferSetDataBufferFromAudioBufferList kCMSampleBufferError_RequiredParameterMissing = -12731

The detailed code for the last step is as follows:

- (CMSampleBufferRef)createCMSampleBufferRefWithData:(char *)nsData
                                          dataLength:(uint32_t)dataLength
                                     sampleBufferRef:(CMSampleBufferRef)originalSampleBuffer {
    // Reuse the format of the original capture buffer.
    CMAudioFormatDescriptionRef cmAudioFormat = CMSampleBufferGetFormatDescription(originalSampleBuffer);
    const AudioStreamBasicDescription *asbd = CMAudioFormatDescriptionGetStreamBasicDescription(cmAudioFormat);
    CMItemCount numSamplesInBuffer = CMSampleBufferGetNumSamples(originalSampleBuffer);

    CMSampleBufferRef audioSampleBuffer = NULL;
    CMAudioFormatDescriptionRef format = NULL;
    OSStatus status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault,
                                                     asbd,
                                                     0, NULL,  // no channel layout
                                                     0, NULL,  // no magic cookie
                                                     NULL,
                                                     &format);
    if (status != noErr) {
        NSLog(@"CMAudioFormatDescriptionCreate failed. code=%d", (int)status);
        return NULL;
    }

    // Keep the original presentation timestamp; one sample lasts 1/sampleRate seconds.
    CMTime pts = CMSampleBufferGetPresentationTimeStamp(originalSampleBuffer);
    CMSampleTimingInfo timing = {CMTimeMake(1, asbd->mSampleRate), pts, kCMTimeInvalid};
    status = CMSampleBufferCreate(kCFAllocatorDefault,
                                  NULL,        // dataBuffer attached later
                                  false,       // dataReady
                                  NULL, NULL,  // no make-data-ready callback
                                  format,
                                  numSamplesInBuffer,
                                  1, &timing,  // one timing entry
                                  0, NULL,     // no sample-size entries
                                  &audioSampleBuffer);
    if (status != noErr) {
        NSLog(@"CMSampleBufferCreate failed. code=%d", (int)status);
        CFRelease(format);
        return NULL;
    }

    // Wrap the processed bytes in a single-buffer AudioBufferList.
    AudioBuffer buffer;
    buffer.mData = nsData;
    buffer.mDataByteSize = dataLength;
    buffer.mNumberChannels = asbd->mChannelsPerFrame;

    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;

    // This is the call that fails with -12731.
    status = CMSampleBufferSetDataBufferFromAudioBufferList(audioSampleBuffer,
                                                            kCFAllocatorDefault,
                                                            kCFAllocatorDefault,
                                                            0,
                                                            &bufferList);
    if (status != noErr) {
        NSLog(@"CMSampleBufferSetDataBufferFromAudioBufferList failed. code=%d", (int)status);
        CFRelease(format);
        CFRelease(audioSampleBuffer);
        return NULL;
    }

    CFRelease(format);
    return audioSampleBuffer;
}

I have already looked at the following links, but they did not solve my problem:

error converting AudioBufferList to CMBlockBufferRef

Converting an AudioBufferList to a CMSampleBuffer Produces Unexpected Results

CMSampleBufferSetDataBufferFromAudioBufferList returning error 12731

How to convert AudioBufferList to CMSampleBuffer?

Their audio data source is an AudioUnit, while mine is AVCapture. Under the premise that the data source is an AudioUnit, setting the timing info with the following code works:

// hostTime is AudioTimeStamp.hostTime, available in an AudioUnit render callback.
struct mach_timebase_info tinfo;
kern_return_t err = mach_timebase_info(&tinfo);
uint32_t hostTimeToNSFactor = tinfo.numer / tinfo.denom;
uint64_t timeNS = (uint64_t)(hostTime * hostTimeToNSFactor);
CMTime presentationTime = CMTimeMake(timeNS, 1000000000);
CMSampleTimingInfo timing = { CMTimeMake(1, 44100), presentationTime, kCMTimeInvalid };

But I cannot get AudioTimeStamp.hostTime from a CMSampleBuffer. Can someone tell me how to get the hostTime from a CMSampleBuffer, or suggest another way to solve the -12731 problem?

Here is the dump of the original sample buffer:

Printing description of originalSampleBuffer:
CMSampleBuffer 0x102064500 retainCount: 1 allocator: 0x1b7085610
    invalid = NO
    dataReady = YES
    makeDataReadyCallback = 0x0
    makeDataReadyRefcon = 0x0
    formatDescription = <CMAudioFormatDescription 0x281e819e0 [0x1b7085610]> {
    mediaType:'soun' 
    mediaSubType:'lpcm' 
    mediaSpecific: {
        ASBD: {
            mSampleRate: 44100.000000 
            mFormatID: 'lpcm' 
            mFormatFlags: 0xc 
            mBytesPerPacket: 2 
            mFramesPerPacket: 1 
            mBytesPerFrame: 2 
            mChannelsPerFrame: 1 
            mBitsPerChannel: 16     } 
        cookie: {(null)} 
        ACL: {Mono}
        FormatList Array: {(null)} 
    } 
    extensions: {(null)}
}
    sbufToTrackReadiness = 0x0
    numSamples = 1024
    sampleTimingArray[1] = {
        {PTS = {31931558330/44100 = 724071.617, rounded}, DTS = {INVALID}, duration = {1/44100 = 0.000}},
    }
    sampleSizeArray[1] = {
        sampleSize = 2,
    }
    dataBuffer = 0x281e88f30

And here is the one I converted:

Printing description of audioSampleBuffer:
CMSampleBuffer 0x102167540 retainCount: 1 allocator: 0x1b7085610
    invalid = NO
    dataReady = NO
    makeDataReadyCallback = 0x0
    makeDataReadyRefcon = 0x0
    formatDescription = <CMAudioFormatDescription 0x281e97f00 [0x1b7085610]> {
    mediaType:'soun' 
    mediaSubType:'lpcm' 
    mediaSpecific: {
        ASBD: {
            mSampleRate: 44100.000000 
            mFormatID: 'lpcm' 
            mFormatFlags: 0xc 
            mBytesPerPacket: 2 
            mFramesPerPacket: 1 
            mBytesPerFrame: 2 
            mChannelsPerFrame: 1 
            mBitsPerChannel: 16     } 
        cookie: {(null)} 
        ACL: {(null)}
        FormatList Array: {(null)} 
    } 
    extensions: {(null)}
}
    sbufToTrackReadiness = 0x0
    numSamples = 1024
    sampleTimingArray[1] = {
        {PTS = {31931558330/44100 = 724071.617, rounded}, DTS = {INVALID}, duration = {1/44100 = 0.000}},
    }
    dataBuffer = 0x0

I am new to audio and video; I hope someone can help me.

0 Answers:

There are no answers yet.