According to the Web Audio API specification (http://webaudio.github.io/web-audio-api/), I can assign an event handler that runs when a source node finishes playing (the source node's onended attribute). But if I call stop(0) on the audio source node, does that event still fire? The spec does not seem clear about this.
I could simply try it in various browsers, but I would like to know the proper standard behavior: does the ended event fire only when the audio plays through to the end, or also when the source node is stopped explicitly with stop()?
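For concreteness, here is the pattern I mean (a minimal sketch; myBuffer is just a placeholder for some already-decoded AudioBuffer, not actual code from my page):
var ctx = new (window.AudioContext || window.webkitAudioContext)();
var src = ctx.createBufferSource();
src.buffer = myBuffer;          // placeholder for an already-decoded AudioBuffer
src.connect(ctx.destination);
// The handler in question
src.onended = function () {
  console.log('ended fired');
};
src.start(0);
// Does onended still fire after this?
src.stop(0);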
Answer 0 (score: 1)
onended
Of type EventHandler; an attribute used to set the EventHandler (described in HTML [HTML]) for the ended event that is dispatched to AudioBufferSourceNode node types. When playback of the AudioBufferSourceNode's buffer has finished, an event of type Event (described in HTML [HTML]) will be dispatched to the event handler.
This says the event fires when the audio data finishes playing, or when it is stopped.
These are the lines that confused me:
void start (optional double when = 0, optional double offset = 0, optional double duration);
void stop (optional double when = 0);
attribute EventHandler onended;
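Taken together with the prose above, the behavior appears to be that onended fires whenever playback stops, whether the buffer ran out or a scheduled stop() time was reached. A hedged sketch of how the three members fit together (noiseBuffer is assumed to be an AudioBuffer created elsewhere):
var ctx = new (window.AudioContext || window.webkitAudioContext)();
var src = ctx.createBufferSource();
src.buffer = noiseBuffer;       // assumed to be created elsewhere
src.connect(ctx.destination);
src.onended = function () {
  // Per the quoted text, this runs once playback has finished,
  // which includes reaching the time scheduled by stop()
  console.log('ended at', ctx.currentTime);
};
// start(when, offset, duration): begin now, from the start of the buffer
src.start(ctx.currentTime);
// stop(when): stop one second later; onended should fire at that point
src.stop(ctx.currentTime + 1);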
Answer 1 (score: 1)
Yes. The onended event is fired both when the audio finishes playing and when stop() is called. You can see this with the white-noise example below.
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var button = document.querySelector('button');
var stop = document.querySelector('#stop');
var source;

// Stereo
var channels = 2;

// Create an empty two second stereo buffer at the
// sample rate of the AudioContext
var frameCount = audioCtx.sampleRate * 2.0;
var myArrayBuffer = audioCtx.createBuffer(2, frameCount, audioCtx.sampleRate);

button.onclick = function () {
  // Fill the buffer with white noise;
  // just random values between -1.0 and 1.0
  for (var channel = 0; channel < channels; channel++) {
    // This gives us the actual ArrayBuffer that contains the data
    var nowBuffering = myArrayBuffer.getChannelData(channel);
    for (var i = 0; i < frameCount; i++) {
      // Math.random() is in [0; 1.0]
      // audio needs to be in [-1.0; 1.0]
      nowBuffering[i] = Math.random() * 2 - 1;
    }
  }

  // Get an AudioBufferSourceNode.
  // This is the AudioNode to use when we want to play an AudioBuffer
  source = audioCtx.createBufferSource();
  // set the buffer in the AudioBufferSourceNode
  source.buffer = myArrayBuffer;
  // connect the AudioBufferSourceNode to the
  // destination so we can hear the sound
  source.connect(audioCtx.destination);
  // start the source playing
  source.start();
  source.onended = function () {
    alert('ended');
  };
};

stop.onclick = function () {
  source.stop();
};
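A note on usage: the snippet assumes the page already contains a plain button element to start playback and a second element with id "stop" to stop it (those selectors come from the code above; no markup is shown in the answer). Once playback has been started, clicking the stop button mid-way pops the 'ended' alert just as letting the two-second buffer play out does, which is exactly the behavior the question asks about.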