My VoIP app runs at a 48 kHz sampling rate. Since it uses Opus, which operates at 48 kHz internally, as its codec, and since current Android hardware natively runs at 48 kHz, AEC is the only piece of the puzzle I'm still missing. I've found the WebRTC implementation, but I can't seem to figure out how to make it work: it appears to corrupt memory at random and sooner or later crashes the whole thing. When it doesn't crash, the sound is rough, as if every other frame were quieter. Here's my code that processes 20 ms frames:
webrtc::SplittingFilter* splittingFilter;
webrtc::IFChannelBuffer* bufferIn;
webrtc::IFChannelBuffer* bufferOut;
webrtc::IFChannelBuffer* bufferOut2;
// ...
splittingFilter = new webrtc::SplittingFilter(1, 3, 960);
bufferIn = new webrtc::IFChannelBuffer(960, 1, 1);
bufferOut = new webrtc::IFChannelBuffer(960, 1, 3);
bufferOut2 = new webrtc::IFChannelBuffer(960, 1, 3);
// ...
int16_t* samples = (int16_t*) data;
float* fsamples[3];
float* foutput[3];
int i;
float* fbuf = bufferIn->fbuf()->bands(0)[0];
// convert the data from 16-bit PCM into float
for (i = 0; i < 960; i++) {
    fbuf[i] = samples[i] / (float) 32767;
}
// split it into three "bands" that the AEC needs and for some reason can't do itself
splittingFilter->Analysis(bufferIn, bufferOut);
// split the frame into 6 consecutive 160-sample blocks and perform AEC on them
for (i = 0; i < 6; i++) {
    fsamples[0] = &bufferOut->fbuf()->bands(0)[0][160 * i];
    fsamples[1] = &bufferOut->fbuf()->bands(0)[1][160 * i];
    fsamples[2] = &bufferOut->fbuf()->bands(0)[2][160 * i];
    foutput[0] = &bufferOut2->fbuf()->bands(0)[0][160 * i];
    foutput[1] = &bufferOut2->fbuf()->bands(0)[1][160 * i];
    foutput[2] = &bufferOut2->fbuf()->bands(0)[2][160 * i];
    int32_t res = WebRtcAec_Process(aecState, (const float* const*) fsamples, 3, foutput, 160, 20, 0);
}
// put the "bands" back together
splittingFilter->Synthesis(bufferOut2, bufferIn);
// convert the processed data back into 16-bit PCM
for (i = 0; i < 960; i++) {
    samples[i] = (int16_t) (CLAMP(fbuf[i], -1, 1) * 32767);
}
If I comment out the actual echo cancellation and just do the float conversion and band splitting back and forth, it doesn't corrupt memory, doesn't sound strange, and runs indefinitely. (I do pass the far-end/speaker signal to the AEC; I just wanted to reduce clutter in my code by leaving it out of the question.)
I've also tried Android's built-in AEC. While it does work, it upsamples the captured signal from 16 kHz.
Answer 0 (score: 0)
Unfortunately, no free AEC package supports 48 kHz. So either move to 32 kHz, or use a commercial AEC package that works at 48 kHz.
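If moving to 32 kHz is an option, the rate conversion itself is cheap. Below is a minimal sketch of a 48 kHz → 32 kHz downsampler using plain linear interpolation; the function name is my own, and a production implementation should low-pass filter first (or use WebRTC's own resampler) to avoid aliasing:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Naive 48 kHz -> 32 kHz downsampler: for every 3 input samples it
// produces 2 output samples via linear interpolation.
// NOTE: no anti-aliasing filter; this is a sketch, not production code.
std::vector<int16_t> Downsample48To32(const std::vector<int16_t>& in) {
    const double ratio = 48000.0 / 32000.0;  // 1.5 input samples per output sample
    const size_t out_len = in.size() * 2 / 3;
    std::vector<int16_t> out(out_len);
    for (size_t i = 0; i < out_len; ++i) {
        const double pos = i * ratio;                 // position in the input
        const size_t idx = static_cast<size_t>(pos);
        const double frac = pos - idx;
        const int16_t a = in[idx];
        const int16_t b = (idx + 1 < in.size()) ? in[idx + 1] : a;
        out[i] = static_cast<int16_t>(a + (b - a) * frac);
    }
    return out;
}
```

A 20 ms frame then shrinks from 960 to 640 samples before being handed to the echo canceller, and a symmetric 2:3 interpolation restores 48 kHz afterwards.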