I am developing a VoIP application for iOS (Objective-C and C++) using Opus.
It works fine at sample rates of 8000, 12000, 24000, and 48000, but at 16000 the application crashes in the opus_encode call.
Here is what I am doing:
m_oAudioSession = [AVAudioSession sharedInstance];
[m_oAudioSession setCategory:AVAudioSessionCategoryPlayAndRecord error:&m_oError];
[m_oAudioSession setMode:AVAudioSessionModeVoiceChat error:&m_oError];
[m_oAudioSession setPreferredSampleRate:VOIP_AUDIO_DRIVER_DEFAULT_SAMPLE_RATE error:&m_oError];
[m_oAudioSession setPreferredInputNumberOfChannels:VOIP_AUDIO_DRIVER_DEFAULT_INPUT_CHANNELS error:&m_oError];
[m_oAudioSession setPreferredOutputNumberOfChannels:VOIP_AUDIO_DRIVER_DEFAULT_OUTPUT_CHANNELS error:&m_oError];
[m_oAudioSession setPreferredIOBufferDuration:VOIP_AUDIO_DRIVER_DEFAULT_BUFFER_DURATION error:&m_oError];
[m_oAudioSession setActive:YES error:&m_oError];
The constants are:
VOIP_AUDIO_DRIVER_DEFAULT_SAMPLE_RATE is 16000
VOIP_AUDIO_DRIVER_DEFAULT_INPUT_CHANNELS is 1
VOIP_AUDIO_DRIVER_DEFAULT_OUTPUT_CHANNELS is 1
VOIP_AUDIO_DRIVER_DEFAULT_BUFFER_DURATION is 0.02
VOIP_AUDIO_DRIVER_FRAMES_PER_PACKET is 1
Afterwards I read the actual sample rate and buffer duration back from m_oAudioSession.sampleRate and m_oAudioSession.IOBufferDuration and store them in the m_fSampleRate and m_fBufferDuration variables.
The configuration looks like this:
//Describes audio component:
m_sAudioDescription.componentType = kAudioUnitType_Output;
m_sAudioDescription.componentSubType = kAudioUnitSubType_VoiceProcessingIO/*kAudioUnitSubType_RemoteIO*/;
m_sAudioDescription.componentFlags = 0;
m_sAudioDescription.componentFlagsMask = 0;
m_sAudioDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
m_sAudioFormat.mSampleRate = m_fSampleRate;
m_sAudioFormat.mFormatID = kAudioFormatLinearPCM;
m_sAudioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
m_sAudioFormat.mFramesPerPacket = VOIP_AUDIO_DRIVER_FRAMES_PER_PACKET;
m_sAudioFormat.mChannelsPerFrame = VOIP_AUDIO_DRIVER_DEFAULT_INPUT_CHANNELS;
m_sAudioFormat.mBitsPerChannel = (UInt32)(8 * m_iBytesPerSample);
m_sAudioFormat.mBytesPerFrame = (UInt32)((m_sAudioFormat.mBitsPerChannel / 8) * m_sAudioFormat.mChannelsPerFrame);
m_sAudioFormat.mBytesPerPacket = m_sAudioFormat.mBytesPerFrame * m_sAudioFormat.mFramesPerPacket;
m_sAudioFormat.mReserved = 0;
The calculations I do are:
m_iBytesPerSample = sizeof(/*AudioSampleType*/SInt16);
//Calculating buffer size:
int samplesPerFrame = (int)(m_fBufferDuration * m_fSampleRate) + 1;
m_iBufferSizeBytes = samplesPerFrame * m_iBytesPerSample;
//Allocating input buffer:
UInt32 inputBufferListSize = offsetof(AudioBufferList, mBuffers[0]) + (sizeof(AudioBuffer) * m_sAudioFormat.mChannelsPerFrame);
m_sInputBuffer = (AudioBufferList *)VoipAlloc(inputBufferListSize);
m_sInputBuffer->mNumberBuffers = m_sAudioFormat.mChannelsPerFrame;
//Pre-mallocating buffers for AudioBufferLists
for(VoipUInt32 tmp_int1 = 0; tmp_int1 < m_sInputBuffer->mNumberBuffers; tmp_int1++)
{
m_sInputBuffer->mBuffers[tmp_int1].mNumberChannels = VOIP_AUDIO_DRIVER_DEFAULT_INPUT_CHANNELS;
m_sInputBuffer->mBuffers[tmp_int1].mDataByteSize = (UInt32)m_iBufferSizeBytes;
m_sInputBuffer->mBuffers[tmp_int1].mData = VoipAlloc(m_iBufferSizeBytes);
memset(m_sInputBuffer->mBuffers[tmp_int1].mData, 0, m_iBufferSizeBytes);
}
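To make the buffer-size calculation above concrete, here is a small standalone snippet (just an illustration, not part of the app) that plugs in 16000 Hz and a 0.016 s buffer duration, one of the durations AVAudioSession reports in my tests below; the assumed values are only examples:
#include <cstdio>
#include <cstdint>

int main()
{
    const double sampleRate     = 16000.0;         // assumed value of m_fSampleRate
    const double bufferDuration = 0.016;           // assumed value of m_fBufferDuration reported by the session
    const int    bytesPerSample = sizeof(int16_t); // matches m_iBytesPerSample = sizeof(SInt16)

    int samplesPerFrame = (int)(bufferDuration * sampleRate) + 1; // 257 with these inputs
    int bufferSizeBytes = samplesPerFrame * bytesPerSample;       // 514 with these inputs

    printf("samplesPerFrame=%d, bufferSizeBytes=%d\n", samplesPerFrame, bufferSizeBytes);
    return 0;
}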
Reading from and writing to the audio unit is done through m_sInputBuffer.
Here is how the Opus encoder and decoder are created:
m_oEncoder = opus_encoder_create(m_iSampleRate, m_iNumberOfChannels, VOIP_AUDIO_CODECS_OPUS_APPLICATION_TYPE, &_error);
if (_error < 0)
{
fprintf(stderr, "VoipAudioCodecs error: failed to create an encoder: %s\n", opus_strerror(_error));
return;
}
_error = opus_encoder_ctl(m_oEncoder, OPUS_SET_BITRATE(VOIP_AUDIO_CODECS_OPUS_BITRATE));
if (_error < 0)
{
fprintf(stderr, "VoipAudioCodecs error: failed to set the bitrate: %s\n", opus_strerror(_error));
return;
}
m_oDecoder = opus_decoder_create(m_iSampleRate, m_iNumberOfChannels, &_error);
if (_error < 0)
{
fprintf(stderr, "VoipAudioCodecs error: failed to create a decoder: %s\n", opus_strerror(_error));
return;
}
The Opus configuration is:
VOIP_AUDIO_CODECS_OPUS_BITRATE is OPUS_BITRATE_MAX
//64000 //70400 //84800 //112000
VOIP_AUDIO_CODECS_OPUS_APPLICATION_TYPE is OPUS_APPLICATION_VOIP
//OPUS_APPLICATION_AUDIO
VOIP_AUDIO_CODECS_OPUS_MAX_FRAME_SIZE is 5760
//Minimum: (120ms; 5760 for 48kHz)
VOIP_AUDIO_CODECS_OPUS_BYTES_SIZE is 960
//120, 240, 480, 960, 1920, 2880
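For reference, here is the arithmetic these constants imply, written as a small standalone snippet (illustration only, not part of the app): a VOIP_AUDIO_CODECS_OPUS_BYTES_SIZE chunk of 16-bit mono samples becomes a frame size in samples, and that frame size corresponds to a different duration at each sample rate I use:
#include <cstdio>
#include <cstdint>

int main()
{
    const int bytesPerChunk   = 960;                            // VOIP_AUDIO_CODECS_OPUS_BYTES_SIZE
    const int bytesPerSample  = (int)sizeof(int16_t);           // same as m_iBytesPerSample
    const int samplesPerChunk = bytesPerChunk / bytesPerSample; // 480; this is the frame_size passed to opus_encode

    const int rates[] = { 8000, 12000, 16000, 24000, 48000 };
    for (int rate : rates)
        printf("%5d Hz: %d samples per chunk = %.1f ms\n",
               rate, samplesPerChunk, 1000.0 * samplesPerChunk / rate);
    return 0;
}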
When encoding and decoding, I use these methods:
sVoipAudioCodecOpusEncoded* Encode_Opus(VoipInt16* rawSamples, int rawSamplesSize)
{
unsigned char encodedData[m_iMaxPacketSize];
VoipInt32 bytesEncoded;
int frameSize = rawSamplesSize / m_iBytesPerSample;
bytesEncoded = opus_encode(m_oEncoder, rawSamples, frameSize, encodedData, m_iMaxPacketSize);
if (bytesEncoded < 0)
{
fprintf(stderr, "VoipAudioCodecs error: encode failed: %s\n", opus_strerror(bytesEncoded));
return nullptr;
}
sVoipAudioCodecOpusEncoded* resultStruct = (sVoipAudioCodecOpusEncoded* )VoipAlloc(sizeof(sVoipAudioCodecOpusEncoded));
resultStruct->m_data = (unsigned char*)VoipAlloc(bytesEncoded);
memcpy(resultStruct->m_data, encodedData, bytesEncoded);
resultStruct->m_dataSize = bytesEncoded;
return resultStruct;
}
sVoipAudioCodecOpusDecoded* Decode_Opus(void* encodedSamples, VoipInt32 encodedSamplesSize)
{
VoipInt16 decodedPacket[VOIP_AUDIO_CODECS_OPUS_MAX_FRAME_SIZE];
int _frameSize = opus_decode(m_oDecoder, (const unsigned char*)encodedSamples, encodedSamplesSize, decodedPacket, VOIP_AUDIO_CODECS_OPUS_MAX_FRAME_SIZE, 0);
if (_frameSize < 0)
{
fprintf(stderr, "VoipAudioCodecs error: decoder failed: %s\n", opus_strerror(_frameSize));
return nullptr;
}
size_t frameSize = (size_t)_frameSize;
sVoipAudioCodecOpusDecoded* resultStruct = (sVoipAudioCodecOpusDecoded* )VoipAlloc(sizeof(sVoipAudioCodecOpusDecoded));
resultStruct->m_data = (VoipInt16*)VoipAlloc(frameSize * m_iBytesPerSample);
memcpy(resultStruct->m_data, decodedPacket, (frameSize * m_iBytesPerSample));
resultStruct->m_dataSize = frameSize * m_iBytesPerSample;
return resultStruct;
}
When the application sends data:
VoipUInt32 itemsForProcess = inputAudioQueue->getItemCount();
for (int tmp_queueItems = 0; tmp_queueItems < itemsForProcess; tmp_queueItems++)
{
sVoipQueue* tmp_samples = inputAudioQueue->popItem();
m_oCircularTempInputBuffer->writeDataToBuffer(tmp_samples->m_pData, tmp_samples->m_iDataSize);
while (void* tmp_buffer = m_oCircularTempInputBuffer->readDataFromBuffer(VOIP_AUDIO_CODECS_OPUS_BYTES_SIZE))
{
sVoipAudioCodecOpusEncoded* encodedSamples = Encode_Opus((VoipInt16*)tmp_buffer, VOIP_AUDIO_CODECS_OPUS_BYTES_SIZE);
//Then packetizing and the actual sending over a TCP socket…
}
//Rest of the code…
}
And here is the reading side:
sVoipAudioCodecOpusDecoded* decodedSamples = Decode_Opus(inputPacket->m_pPacketData, (VoipInt32)inputPacket->m_iPacketSize);
if (decodedSamples != nullptr)
{
m_oCircularTempOutputBuffer->writeDataToBuffer(decodedSamples->m_data, decodedSamples->m_dataSize);
VoipFree((void**)&decodedSamples->m_data);
VoipFree((void**)&decodedSamples);
}
while (void* tmp_buffer = m_oCircularTempOutputBuffer->readDataFromBuffer(m_iBufferSizeBytes))
{
outputAudioQueue->pushItem(tmp_buffer, m_iBufferSizeBytes);
}
inputAudioQueue is a queue that holds the recorded data coming from the audio unit's callback.
outputAudioQueue is a queue used to play sound from my audio unit's callback.
m_iMaxPacketSize is the same as m_iBufferSizeBytes.
My questions are:
Are my calculations correct?
If not, how can I improve them?
Do you see any mistakes in the code?
Do you have any suggestions for fixing the crash in opus_encode when the sample rate is set to 16000?
Thanks in advance.
PS: I ran some tests with a sample rate of 16000 and found the following. If I use the formula frame_duration = frame_size / sample_rate and set frame_duration as the preferredIOBufferDuration:
120 / 16000 = 0.0075 // AVAudioSession sets 0.008000 - crashes
240 / 16000 = 0.015 // AVAudioSession sets 0.016000 - crashes
480 / 16000 = 0.03 // AVAudioSession sets 0.032000 - crashes
960 / 16000 = 0.06 // AVAudioSession sets 0.064000 - crashes
1920 / 16000 = 0.12 // AVAudioSession sets 0.128000 - works
2880 / 16000 = 0.18 // AVAudioSession sets 0.128000 - crashes
So with a sample rate of 16000 and a preferredIOBufferDuration of 0.12 (1920 samples), where AVAudioSession sets 0.128000, the encoder does not crash. It works only in that one case.
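For clarity, the calculation I used above as a tiny standalone snippet (illustration only, not part of the app), which reproduces the numbers in the list:
#include <cstdio>

int main()
{
    const double sampleRate = 16000.0;
    const int frameSizes[] = { 120, 240, 480, 960, 1920, 2880 };
    for (int frames : frameSizes)
        printf("%4d / %.0f = %.4f s\n", frames, sampleRate, frames / sampleRate);
    return 0;
}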
Any ideas?