How to create an AudioBuffer from NSData (audio)

Posted: 2016-05-19 06:02:02

Tags: ios avaudioplayer audiobuffer

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

    // Copy the audio samples out of the CMSampleBuffer into an AudioBufferList.
    AudioBufferList audioBufferList;
    NSMutableData *data = [NSMutableData data];
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);

    // Append the raw samples from every buffer in the list to the NSData.
    for (int y = 0; y < audioBufferList.mNumberBuffers; y++) {

        AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
        Float32 *frame = (Float32 *)audioBuffer.mData;

        [data appendBytes:frame length:audioBuffer.mDataByteSize];
    }

    CFRelease(blockBuffer);

    // Try to play the collected data with AVAudioPlayer.
    AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithData:data error:nil];
    [player play];
}

I don't know how to convert the NSData into an audio buffer.

AVAudioPlayer initialized with the data above returns nil with the following error: Error Domain=NSOSStatusErrorDomain Code=1954115647
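(For reference: OSStatus 1954115647 is the four-character code 'typ?', i.e. kAudioFileUnsupportedFileTypeError. AVAudioPlayer expects data in an audio file format it can parse, such as WAV, CAF, or MP3, while the bytes collected above are raw LPCM samples with no header. Below is a minimal sketch of one way to play raw Float32 PCM directly with AVAudioEngine and AVAudioPCMBuffer; the sample rate, the single-channel assumption, and the engine/playerNode properties are illustrative assumptions, not taken from the question.)

- (void)playRawPCM:(NSData *)data sampleRate:(double)sampleRate {

    // Assumed format: non-interleaved mono Float32 at the given sample rate.
    AVAudioFormat *format = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatFloat32
                                                             sampleRate:sampleRate
                                                               channels:1
                                                            interleaved:NO];

    // Wrap the raw bytes in an AVAudioPCMBuffer.
    AVAudioFrameCount frames = (AVAudioFrameCount)(data.length / sizeof(Float32));
    AVAudioPCMBuffer *buffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:format
                                                             frameCapacity:frames];
    buffer.frameLength = frames;
    memcpy(buffer.floatChannelData[0], data.bytes, data.length);

    // self.engine and self.playerNode are assumed instance properties,
    // created and connected once rather than on every callback.
    if (self.engine == nil) {
        self.engine = [[AVAudioEngine alloc] init];
        self.playerNode = [[AVAudioPlayerNode alloc] init];
        [self.engine attachNode:self.playerNode];
        [self.engine connect:self.playerNode to:self.engine.mainMixerNode format:format];
        [self.engine startAndReturnError:nil];
    }

    // Schedule the buffer and start playback.
    [self.playerNode scheduleBuffer:buffer completionHandler:nil];
    [self.playerNode play];
}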

1 Answer:

Answer 0 (score: 0)

Try adding the player as an instance variable:

@property (strong, nonatomic) AVAudioPlayer *player;

Then assign your player from captureOutput:
self.player = [[AVAudioPlayer alloc] initWithData:data error:nil];
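For context, here is a minimal sketch of how that suggestion could fit into the delegate callback; the ViewController class extension is hypothetical, and the NSError logging is added here only to surface the underlying failure, it is not part of the original answer:

@interface ViewController ()
@property (strong, nonatomic) AVAudioPlayer *player;
@end

// Inside captureOutput:didOutputSampleBuffer:fromConnection:
NSError *error = nil;
self.player = [[AVAudioPlayer alloc] initWithData:data error:&error];
if (error) {
    // Logs the OSStatus error mentioned in the question.
    NSLog(@"AVAudioPlayer init failed: %@", error);
} else {
    [self.player play];
}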

Hope this works :)