Phonegap mixing audio files

Date: 2014-07-30 15:23:30

Tags: ios cordova audio html5-audio web-audio

I'm building a karaoke app with PhoneGap for iOS.

I have audio files in the www/assets folder that I can play with the media.play() function.

This lets the user listen to the backing track. While that Media instance is playing, a second Media instance is recording.

Once recording finishes, I need to lay the recorded file over the backing track, and I don't know how to do that.

One approach I thought might work is the Web Audio API - I took the following code from HTML5 Rocks, which loads two files into an AudioContext and lets me play both at once. What I want to do, though, is write the two buffers into a single .wav file. Is there any way to combine source1 and source2 into a single new file?

var context;
var bufferLoader;

function init() {
    // Fix up prefixing
    window.AudioContext = window.AudioContext || window.webkitAudioContext;
    context = new AudioContext();

    bufferLoader = new BufferLoader(
        context,
        [
            'backingTrack.wav',
            'voice.wav',
        ],
        finishedLoading
    );

    bufferLoader.load();
}

function finishedLoading(bufferList) {
    // Create two sources and play them both together.
    var source1 = context.createBufferSource();
    var source2 = context.createBufferSource();
    source1.buffer = bufferList[0];
    source2.buffer = bufferList[1];

    source1.connect(context.destination);
    source2.connect(context.destination);
    source1.start(0);
    source2.start(0);
}


function BufferLoader(context, urlList, callback) {
    this.context = context;
    this.urlList = urlList;
    this.onload = callback;
    this.bufferList = new Array();
    this.loadCount = 0;
}

BufferLoader.prototype.loadBuffer = function(url, index) {
    // Load buffer asynchronously
    var request = new XMLHttpRequest();
    request.open("GET", url, true);
    request.responseType = "arraybuffer";

    var loader = this;

    request.onload = function() {
        // Asynchronously decode the audio file data in request.response
        loader.context.decodeAudioData(
            request.response,
            function(buffer) {
                if (!buffer) {
                    alert('error decoding file data: ' + url);
                    return;
                }
                loader.bufferList[index] = buffer;
                if (++loader.loadCount == loader.urlList.length)
                    loader.onload(loader.bufferList);
            },
            function(error) {
                console.error('decodeAudioData error', error);
            }
        );
    }

    request.onerror = function() {
        alert('BufferLoader: XHR error');
    }

    request.send();
}

BufferLoader.prototype.load = function() {
    for (var i = 0; i < this.urlList.length; ++i)
        this.loadBuffer(this.urlList[i], i);
}

There may be something useful in this solution: "How do I convert an array of audio data into a wav file?" As far as I can tell, they are interleaving the two buffers and encoding them as a .wav, but I can't figure out where they write them out to a file (saving the new wav file). Any ideas?

The answer below doesn't help, since I'm using the Web Audio API (JavaScript), not iOS.

2 Answers:

Answer 0 (score: 6)

The solution is to use an OfflineAudioContext.

The steps are:

1. Load the two files as buffers using the BufferLoader
2. Create an OfflineAudioContext
3. Connect both buffers to the OfflineAudioContext
4. Start both buffers
5. Call the offline context's startRendering function
6. Set the offline context's oncomplete handler to get a handle on the renderedBuffer

Here is the code:

// 2 output channels, length of the recorded vocal in sample-frames, 44.1 kHz
offline = new webkitOfflineAudioContext(2, voice.buffer.length, 44100);

// bufferList comes from the BufferLoader in the question
vocalSource = offline.createBufferSource();
vocalSource.buffer = bufferList[0];
vocalSource.connect(offline.destination);

backing = offline.createBufferSource();
backing.buffer = bufferList[1];
backing.connect(offline.destination);

vocalSource.start(0);
backing.start(0);

// Fires when rendering finishes; ev.renderedBuffer holds the mixed audio
offline.oncomplete = function(ev){
    alert(bufferList);
    playBackMix(ev);
    console.log(ev.renderedBuffer);
    sendWaveToPost(ev);
}
offline.startRendering();
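The question also asks how to get an actual .wav file out of the result, and sendWaveToPost above is left undefined. As a hedged sketch (encodeWav is a hypothetical helper name, not part of the Web Audio API): the rendered channels can be interleaved and wrapped in a 16-bit PCM RIFF/WAVE header, e.g. by calling encodeWav([ev.renderedBuffer.getChannelData(0), ev.renderedBuffer.getChannelData(1)], 44100) inside oncomplete:

```javascript
// Hypothetical helper: interleave an array of Float32Array channels and
// prepend a 44-byte PCM WAV header. Returns an ArrayBuffer of the .wav bytes.
function encodeWav(channelData, sampleRate) {
    var numChannels = channelData.length;
    var numFrames = channelData[0].length;
    var bytesPerSample = 2;                            // 16-bit PCM
    var dataSize = numFrames * numChannels * bytesPerSample;
    var buffer = new ArrayBuffer(44 + dataSize);
    var view = new DataView(buffer);

    function writeString(offset, str) {
        for (var i = 0; i < str.length; i++)
            view.setUint8(offset + i, str.charCodeAt(i));
    }

    // RIFF/WAVE header
    writeString(0, 'RIFF');
    view.setUint32(4, 36 + dataSize, true);            // chunk size
    writeString(8, 'WAVE');
    writeString(12, 'fmt ');
    view.setUint32(16, 16, true);                      // fmt chunk size
    view.setUint16(20, 1, true);                       // audio format: PCM
    view.setUint16(22, numChannels, true);
    view.setUint32(24, sampleRate, true);
    view.setUint32(28, sampleRate * numChannels * bytesPerSample, true); // byte rate
    view.setUint16(32, numChannels * bytesPerSample, true);              // block align
    view.setUint16(34, 16, true);                      // bits per sample
    writeString(36, 'data');
    view.setUint32(40, dataSize, true);

    // Interleave channels, converting floats in [-1, 1] to signed 16-bit
    var offset = 44;
    for (var frame = 0; frame < numFrames; frame++) {
        for (var ch = 0; ch < numChannels; ch++) {
            var s = Math.max(-1, Math.min(1, channelData[ch][frame]));
            view.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
            offset += 2;
        }
    }
    return buffer;
}
```

The resulting ArrayBuffer could then be wrapped in a Blob for upload, or written out with the Cordova File API.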

Answer 1 (score: 1)

I would suggest mixing the PCM values directly. If you initialize a buffer that overlaps the time frames of both tracks, the formula is additive:

mix(a, b) = a + b - a * b / 65535

That formula depends on unsigned 16-bit integers. Here is an example:

SInt16 *bufferA, *bufferB;     // the two input tracks (signed 16-bit PCM)
NSInteger bufferLength;        // number of samples in each buffer
SInt16 *outputBuffer;          // mixed result, same length

for ( NSInteger i=0; i<bufferLength; i++ ) {
  if ( bufferA[i] < 0 && bufferB[i] < 0 ) {
    // If both samples are negative, mixed signal must have an amplitude between 
    // the lesser of A and B, and the minimum permissible negative amplitude
    outputBuffer[i] = (bufferA[i] + bufferB[i]) - ((bufferA[i] * bufferB[i])/INT16_MIN);
  } else if ( bufferA[i] > 0 && bufferB[i] > 0 ) {
    // If both samples are positive, mixed signal must have an amplitude between the greater of
    // A and B, and the maximum permissible positive amplitude
    outputBuffer[i] = (bufferA[i] + bufferB[i]) - ((bufferA[i] * bufferB[i])/INT16_MAX);
  } else {
    // If samples are on opposite sides of the 0-crossing, mixed signal should reflect 
    // that samples cancel each other out somewhat
    outputBuffer[i] = bufferA[i] + bufferB[i];
  }
}

This can be a very efficient way of handling signed 16-bit audio. Go here for the source.
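Since the asker is working in JavaScript rather than iOS, the same rule can be carried over to the Web Audio world, where getChannelData() returns floats in [-1, 1] instead of signed 16-bit integers. The sketch below is my own hedged adaptation, not code from the answer; mixSamples and mixBuffers are made-up names:

```javascript
// Mix two normalized float samples using the same rule as the SInt16 code:
// scaling the 16-bit formula by 32768 turns a*b/INT16_MAX into a*b, and
// a*b/INT16_MIN into -a*b, for samples already in [-1, 1].
function mixSamples(a, b) {
    if (a < 0 && b < 0) return a + b + a * b; // both negative
    if (a > 0 && b > 0) return a + b - a * b; // both positive
    return a + b;                             // opposite signs: plain sum
}

// Mix two Float32Array channels sample-by-sample; the shorter track is
// padded with silence so the output covers the longer one.
function mixBuffers(bufferA, bufferB) {
    var out = new Float32Array(Math.max(bufferA.length, bufferB.length));
    for (var i = 0; i < out.length; i++)
        out[i] = mixSamples(bufferA[i] || 0, bufferB[i] || 0);
    return out;
}
```

The mixed Float32Array could then be copied into an AudioBuffer channel, or fed straight into a WAV encoder.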