Using EZAudio I want to create a mono AudioBufferList wherever possible. Previously each AudioBuffer held 46 bytes and the bufferDuration was relatively small. First, if I use the AudioStreamBasicDescription below for both input and output:
AudioStreamBasicDescription audioFormat;
audioFormat.mBitsPerChannel = 8 * sizeof(AudioUnitSampleType);
audioFormat.mBytesPerFrame = sizeof(AudioUnitSampleType);
audioFormat.mBytesPerPacket = sizeof(AudioUnitSampleType);
audioFormat.mChannelsPerFrame = 2;
audioFormat.mFormatFlags = kAudioFormatFlagsCanonical | kAudioFormatFlagIsNonInterleaved;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFramesPerPacket = 1;
audioFormat.mSampleRate = 44100;
and use TPCircularBuffer as the transport, I end up with two buffers in the bufferList, each with an mDataByteSize of 4096, which is definitely too much. So I tried my earlier ASBD instead:
audioFormat.mSampleRate = 8000.00;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagsCanonical | kAudioFormatFlagIsNonInterleaved;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 8;
audioFormat.mBytesPerPacket = 1;
audioFormat.mBytesPerFrame = 1;
Now mDataByteSize is 128 and I get only a single buffer, but TPCircularBuffer no longer handles it correctly. I suspect that is because I only want to use one channel. So for the moment I have dropped TPCircularBuffer and am trying to encode the bytes into NSData and decode them on the other side, or, just for testing, to pass the AudioBufferList straight through; but even with the first AudioStreamBasicDescription the sound comes out badly distorted.
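For reference, the transport pattern I was attempting with TPCircularBuffer looks roughly like this (just a sketch: `_cBuffer` is the circular buffer initialized in initMicrophone below, and `outBuffer` stands for whatever AudioBuffer the output side is filling):

    // Producer side -- in the microphone's hasBufferList: callback:
    TPCircularBufferProduceBytes(&_cBuffer,
                                 bufferList->mBuffers[0].mData,
                                 (int32_t)bufferList->mBuffers[0].mDataByteSize);

    // Consumer side -- in the output's data-source callback:
    int32_t availableBytes = 0;
    void *tail = TPCircularBufferTail(&_cBuffer, &availableBytes);
    if (tail != NULL && availableBytes > 0) {
        UInt32 bytesToCopy = MIN((UInt32)availableBytes, outBuffer->mDataByteSize);
        memcpy(outBuffer->mData, tail, bytesToCopy);
        TPCircularBufferConsume(&_cBuffer, (int32_t)bytesToCopy);
    }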
My current code:
- (void)initMicrophone {
    AudioStreamBasicDescription audioFormat;
    //* comment toggle: delete the first '/' to switch to the 8 kHz mono format below
    audioFormat.mBitsPerChannel   = 8 * sizeof(AudioUnitSampleType);
    audioFormat.mBytesPerFrame    = sizeof(AudioUnitSampleType);
    audioFormat.mBytesPerPacket   = sizeof(AudioUnitSampleType);
    audioFormat.mChannelsPerFrame = 2;
    audioFormat.mFormatFlags      = kAudioFormatFlagsCanonical | kAudioFormatFlagIsNonInterleaved;
    audioFormat.mFormatID         = kAudioFormatLinearPCM;
    audioFormat.mFramesPerPacket  = 1;
    audioFormat.mSampleRate       = 44100;
    /*/
    audioFormat.mSampleRate       = 8000.00;
    audioFormat.mFormatID         = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags      = kAudioFormatFlagsCanonical | kAudioFormatFlagIsNonInterleaved;
    audioFormat.mFramesPerPacket  = 1;
    audioFormat.mChannelsPerFrame = 1;
    audioFormat.mBitsPerChannel   = 8;
    audioFormat.mBytesPerPacket   = 1;
    audioFormat.mBytesPerFrame    = 1;
    //*/

    _microphone = [EZMicrophone microphoneWithDelegate:self withAudioStreamBasicDescription:audioFormat];
    _output     = [EZOutput outputWithDataSource:self withAudioStreamBasicDescription:audioFormat];
    // 128 bytes of circular-buffer capacity -- far smaller than the
    // 4096-byte buffers the 44.1 kHz format delivers
    [EZAudio circularBuffer:&_cBuffer withSize:128];
}
- (void)startSending {
    [_microphone startFetchingAudio];
    [_output startPlayback];
}

- (void)stopSending {
    [_microphone stopFetchingAudio];
    [_output stopPlayback];
}
- (void)microphone:(EZMicrophone *)microphone
  hasAudioReceived:(float **)buffer
    withBufferSize:(UInt32)bufferSize
withNumberOfChannels:(UInt32)numberOfChannels {
    // intentionally empty -- nothing is done with the raw float buffers yet
    dispatch_async(dispatch_get_main_queue(), ^{
    });
}
- (void)microphone:(EZMicrophone *)microphone
     hasBufferList:(AudioBufferList *)bufferList
    withBufferSize:(UInt32)bufferSize
withNumberOfChannels:(UInt32)numberOfChannels {
    //* active branch: keep a pointer to the incoming buffer list
    abufferlist = bufferList;
    /*/
    // disabled branch: copy the first buffer's bytes into NSData instead
    audioBufferData = [NSData dataWithBytes:bufferList->mBuffers[0].mData
                                     length:bufferList->mBuffers[0].mDataByteSize];
    //*/
    dispatch_async(dispatch_get_main_queue(), ^{
    });
}
- (AudioBufferList *)output:(EZOutput *)output needsBufferListWithFrames:(UInt32)frames withBufferSize:(UInt32 *)bufferSize {
    //* active branch: hand back the pointer saved in the microphone callback
    return abufferlist;
    /*/
    // disabled branch: rebuild a mono buffer list from audioBufferData
    int bSize = 128;
    AudioBuffer audioBuffer;
    audioBuffer.mNumberChannels = 1;
    audioBuffer.mDataByteSize   = bSize;
    audioBuffer.mData           = malloc(bSize);
    // [audioBufferData getBytes:audioBuffer.mData length:bSize];
    memcpy(audioBuffer.mData, [audioBufferData bytes], bSize);

    AudioBufferList *bufferList = [EZAudio audioBufferList];
    bufferList->mNumberBuffers = 1;
    bufferList->mBuffers[0]    = audioBuffer;

    return bufferList;
    //*/
}
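For reference, here is the disabled NSData branch above cleaned up so the buffer is sized from the stored data rather than a hard-coded 128 (still only a sketch: nothing here ever frees the malloc'd memory, and `audioBufferData` must already have been filled by the microphone callback):

- (AudioBufferList *)output:(EZOutput *)output
  needsBufferListWithFrames:(UInt32)frames
             withBufferSize:(UInt32 *)bufferSize {
    // Size the single mono buffer from whatever the microphone callback stored.
    UInt32 bSize = (UInt32)[audioBufferData length];

    AudioBuffer audioBuffer;
    audioBuffer.mNumberChannels = 1;
    audioBuffer.mDataByteSize   = bSize;
    audioBuffer.mData           = malloc(bSize);
    memcpy(audioBuffer.mData, [audioBufferData bytes], bSize);

    AudioBufferList *list = [EZAudio audioBufferList];
    list->mNumberBuffers = 1;
    list->mBuffers[0]    = audioBuffer;
    return list;
}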
I know the value of bSize in output:needsBufferListWithFrames:withBufferSize: may have to change.
My main goal is to capture sound that is as lightweight as possible, mono if it can be, encode it into NSData and decode it on the output side. Can you tell me what I am doing wrong?
Answer 0 (score: 0)
I had the same problem and moved to AVAudioRecorder, setting the parameters I needed. I kept EZAudio (EZMicrophone) for the audio visualization. Here is a link on how to implement it:
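A minimal mono recording setup along those lines might look roughly like this (a sketch only; the file name and settings values are illustrative, not the exact ones I used):

#import <AVFoundation/AVFoundation.h>

// Sketch: 8 kHz, mono, 16-bit linear-PCM recording via AVAudioRecorder.
NSDictionary *settings = @{
    AVFormatIDKey:          @(kAudioFormatLinearPCM),
    AVSampleRateKey:        @8000.0,
    AVNumberOfChannelsKey:  @1,
    AVLinearPCMBitDepthKey: @16
};
NSURL *url = [NSURL fileURLWithPath:
    [NSTemporaryDirectory() stringByAppendingPathComponent:@"record.caf"]];
NSError *error = nil;
AVAudioRecorder *recorder = [[AVAudioRecorder alloc] initWithURL:url
                                                        settings:settings
                                                           error:&error];
if (recorder != nil && [recorder prepareToRecord]) {
    [recorder record];   // EZMicrophone can keep running alongside for visualization
}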