Audio mixing algorithm changing volume

Date: 2015-02-26 15:31:56

Tags: c++ algorithm audio

I am trying to mix some audio samples using the following algorithm:

short* FilterGenerator::mixSources(std::vector<RawData> rawsources, int numframes)
{
    short* output = new short[numframes * 2]; // multiply by 2 for the channels

    for (int sample = 0; sample < numframes * 2; ++sample)
    {
        for (int sourceCount = 0; sourceCount < rawsources.size(); ++sourceCount)
        {
            if (sample <= rawsources.at(sourceCount).frames * 2)
            {
                short outputSample = rawsources.at(sourceCount).data[sample];
                output[sample] += outputSample;
            }
        }
    }

    // post mixing volume compression
    for (int sample = 0; sample < numframes; ++sample)
    {
        output[sample] /= (float)rawsources.size();
    }

    return output;
}

I get the output I want, except that when one of the sources finishes, the other sources start to play louder. I know why this happens, but I don't know how to fix it properly.

Also, here is a screenshot from Audacity of the audio output: [Audacity screenshot]

As you can see, something is definitely wrong. The audio is no longer centered around zero, and it gets louder once one of the sources has finished playing.

Most of all I want to fix the volume problem, but any other tweaks I could make are greatly appreciated!

Some extra information: I know this code doesn't allow mono sources, but that's fine. I will only be using stereo interleaved audio samples.

3 Answers:

Answer 0 (score: 1)

Usually when mixing, don't divide by the number of sources: that would mean mixing a normal track with a silent track halves its amplitude. If you want, you can instead normalize the track at the end so that it stays within the valid range.

The code is untested and may contain errors:

#include <algorithm> // for std::max
#include <cmath>     // for std::fabs

short* FilterGenerator::mixSources(std::vector<RawData> rawsources, int numframes)
{
  // We cannot accumulate into shorts directly because the sum can overflow.
  // Floats are used so that the renormalization introduces no distortion.
  float* outputFloating = new float[numframes * 2];

  // The maximum absolute value of the signal
  float maximumOutput = 0;

  for (int sample = 0; sample < numframes * 2; ++sample)
  {
      // Make sure each sample starts at zero
      outputFloating[sample] = 0;

      for (int sourceCount = 0; sourceCount < rawsources.size(); ++sourceCount)
      {
          // I think this should be a '<', not the '<=' from the question
          if (sample < rawsources.at(sourceCount).frames * 2)
              outputFloating[sample] += rawsources.at(sourceCount).data[sample];
      }

      // Keep track of the running maximum
      maximumOutput = std::max(maximumOutput, std::fabs(outputFloating[sample]));
  }

  // The short output buffer
  short* output = new short[numframes * 2]; // multiply by 2 for the channels

  // Scale down only if the mix exceeds the 16-bit range
  float multiplier = maximumOutput > 32767 ? 32767 / maximumOutput : 1;

  // Renormalize the track
  for (int sample = 0; sample < numframes * 2; ++sample)
      output[sample] = (short)(outputFloating[sample] * multiplier);

  delete[] outputFloating;
  return output;
}

Answer 1 (score: 0)

Since you add everything into a short before dividing, you can overflow. You need a wider intermediate type. Also, the final scaling should not depend on the number of sources; it should be a constant gain that you determine before calling your function:

short* FilterGenerator::mixSources(std::vector<RawData> rawsources, int numframes, double gain = 0.5)
{
    short* output = new short[numframes * 2]; // multiply by 2 for the channels

    for (int sample = 0; sample < numframes * 2; ++sample)
    {
        long newSample = 0; // wider accumulator so the sum cannot overflow
        for (int sourceCount = 0; sourceCount < rawsources.size(); ++sourceCount)
        {
            if (sample < rawsources.at(sourceCount).frames * 2) // '<', not '<='
            {
                short outputSample = rawsources.at(sourceCount).data[sample];
                newSample += outputSample;
            }
        }
        output[sample] = (short)(newSample * gain);
    }

    return output;
}

Answer 2 (score: 0)

You don't really have to do "post mixing volume compression". Just add up all the sources and make sure the sum doesn't overflow. This should work:

short* FilterGenerator::mixSources(std::vector<RawData> rawsources, int numframes)
{
    short* output = new short[numframes * 2]; // multiply by 2 for the channels

    for (int sample = 0; sample < numframes * 2; ++sample)
    {
        long sum = 0; // wider accumulator so the addition cannot overflow
        for (int sourceCount = 0; sourceCount < rawsources.size(); ++sourceCount)
        {
            if (sample < rawsources.at(sourceCount).frames * 2)
            {
                short outputSample = rawsources.at(sourceCount).data[sample];
                sum += outputSample;
            }
        }
        // Clamp to the 16-bit range, then store, once per sample
        if (sum > 32767) sum = 32767;
        if (sum < -32768) sum = -32768;
        output[sample] = (short)sum;
    }

    return output;
}