My code:
// Create an AudioContext instance for this sound
var audioContext = new (window.AudioContext || window.webkitAudioContext)();
var maxChannelCount = audioContext.destination.maxChannelCount;
var gainNode = audioContext.createGain();
audioContext.destination.channelCount = maxChannelCount;
var merger = audioContext.createChannelMerger(maxChannelCount);
merger.connect(audioContext.destination);
gainNode.gain.value = 0.1; // 10%
gainNode.connect(audioContext.destination);
// Create a buffer for the incoming sound content
var source = audioContext.createBufferSource();
// Create the XHR which will grab the audio contents
var request = new XMLHttpRequest();
// Set the audio file src here
request.open('GET', 'phonemes/bad-bouyed/bad.mp3', true);
// Setting the responseType to arraybuffer sets up the audio decoding
request.responseType = 'arraybuffer';
request.onload = function() {
// Decode the audio once the request is complete
audioContext.decodeAudioData(request.response, function(buffer) {
source.buffer = buffer;
// Connect the source to the merger (Can I also set the gain within this declaration?)
source.connect(merger, 0, 10);
// Simple setting for the buffer
source.loop = false;
// Play the sound!
source.start(0);
}, function(e) {
console.log('Audio error! ', e);
});
}
// Send the request, which kicks everything off
request.send();
The code above loads an mp3 file and plays it back using the Web Audio API, and it works well. I just want to know whether, with the code I already have, I can both set the gain on the source and choose the output channel. You can see above that I have already created a gain node and connected it to the audio context. What is the syntax for specifying both? Do I just declare them in the order I want them to execute?
For example:
source.connect(merger, 0, 10);
source.connect(gainNode);
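In case it makes the question clearer, here is the routing I imagine I need, pieced together from the MDN docs (a rough sketch only; the channel index 1 is just a placeholder and I'm not sure the ordering is right):

source.connect(gainNode);        // source feeds into the gain node first
gainNode.connect(merger, 0, 1);  // gain output 0 into merger input 1 (placeholder index)
// merger is already connected to audioContext.destination above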
Any help with this would be great.