We need to stream live audio (from a medical device) to web browsers with no more than 3-5 s of end-to-end delay (assuming 200 ms or less of network latency). Today we use a browser plugin (NPAPI) to decode, filter (high, low, band), and play the audio stream, which is delivered via Web Sockets.

We want to replace the plugin.

I have been looking at various Web Audio API demos, and most of the functionality we need (playback, gain control, filtering) appears to be available in the Web Audio API. What is not clear to me, however, is whether the Web Audio API can be used with streaming sources, since most Web Audio API examples work with short sounds and/or audio clips.

Can the Web Audio API be used to play live streamed audio?
Update (11-Feb-2015):
After some research and local prototyping, I am not sure live audio streaming with the Web Audio API is possible, because the Web Audio API's decodeAudioData isn't really designed to handle arbitrary chunks of audio data (in our case delivered via WebSockets). It appears to need the entire "file" in order to process it correctly.
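To illustrate the constraint: if latency did not matter, one could simply accumulate every WebSocket chunk and hand decodeAudioData a single complete file. A minimal sketch of that idea follows; the socket URL and the assumption that the server closes the socket at end-of-stream are hypothetical, and the chunk-joining helper is plain JavaScript:

```javascript
// Join a list of ArrayBuffer chunks into one contiguous ArrayBuffer, so
// decodeAudioData sees a complete encoded file instead of arbitrary slices.
function concatChunks(chunks) {
  const totalLength = chunks.reduce((sum, c) => sum + c.byteLength, 0);
  const joined = new Uint8Array(totalLength);
  let offset = 0;
  for (const chunk of chunks) {
    joined.set(new Uint8Array(chunk), offset);
    offset += chunk.byteLength;
  }
  return joined.buffer;
}

// Browser-only wiring (not invoked here): collect all chunks, then decode once.
function decodeWhenComplete(socketUrl, audioContext, onDecoded) {
  const chunks = [];
  const ws = new WebSocket(socketUrl); // hypothetical stream endpoint
  ws.binaryType = 'arraybuffer';
  ws.onmessage = (event) => chunks.push(event.data);
  // Assumes the server closes the socket at end-of-stream.
  ws.onclose = () =>
    audioContext.decodeAudioData(concatChunks(chunks)).then(onDecoded);
}
```

Of course, waiting for the whole stream blows the 3-5 s latency budget, which is exactly why decodeAudioData alone doesn't solve our problem.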
See Stack Overflow:

It now looks like createMediaElementSource can be used to connect an <audio> element to the Web Audio API, but in my experience the <audio> element introduces a large amount of end-to-end delay (15-30 s), and there doesn't appear to be any way to reduce that delay below 3-5 seconds.

I think the only solution is to use WebRTC together with the Web Audio API. I was hoping to avoid WebRTC, since it will require significant changes to our server-side implementation.
Update (12-Feb-2015), Part I:
I haven't completely eliminated the <audio> tag yet (need to finish my prototype). Once I have ruled it out, I suspect createScriptProcessor (deprecated but still supported) will be a good option for our environment, since I could "stream" (via WebSockets) our ADPCM data to the browser and then (in JavaScript) convert it to PCM, similar to what Scott's library (see below) does with createScriptProcessor. This approach doesn't require the data to come in properly sized chunks with critical timing, as the decodeAudioData approach does.
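A rough sketch of that createScriptProcessor approach is below. The WebSocket endpoint and the 16-bit mono PCM framing are assumptions; a real ADPCM decoder would replace int16ToFloat32, which here only rescales samples:

```javascript
// Convert 16-bit signed PCM samples to the Float32 range [-1, 1)
// that Web Audio expects.
function int16ToFloat32(int16Samples) {
  const floats = new Float32Array(int16Samples.length);
  for (let i = 0; i < int16Samples.length; ++i) {
    floats[i] = int16Samples[i] / 32768;
  }
  return floats;
}

// Browser-only wiring (not invoked here): queue decoded samples arriving
// from the socket and drain them inside the ScriptProcessorNode callback.
function startPcmStream(socketUrl, audioContext) {
  const queue = []; // Float32Array chunks awaiting playback
  const ws = new WebSocket(socketUrl); // hypothetical stream endpoint
  ws.binaryType = 'arraybuffer';
  ws.onmessage = (event) =>
    queue.push(int16ToFloat32(new Int16Array(event.data)));

  const processor = audioContext.createScriptProcessor(4096, 1, 1);
  processor.onaudioprocess = (event) => {
    const output = event.outputBuffer.getChannelData(0);
    let filled = 0;
    while (filled < output.length && queue.length > 0) {
      const chunk = queue[0];
      const n = Math.min(chunk.length, output.length - filled);
      output.set(chunk.subarray(0, n), filled);
      filled += n;
      if (n === chunk.length) queue.shift();
      else queue[0] = chunk.subarray(n); // keep the unplayed remainder
    }
    output.fill(0, filled); // underrun: emit silence rather than stale data
  };
  processor.connect(audioContext.destination);
}
```

The queue absorbs the jitter between network arrival and the audio callback; the depth of that queue (plus the 4096-sample render quantum) is what sets the added latency.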
Update (12-Feb-2015), Part II:
After more testing, I have eliminated the <audio>-to-Web-Audio-API approach, because depending on the source type, compression, and browser, the end-to-end delay can be anywhere from 3 to 30 seconds. That leaves the createScriptProcessor method (see Scott's post below) or WebRTC. After discussing it with our decision makers, we have decided to take the WebRTC approach. I assume it will work, but it will require changes to our server-side code.
I'm going to mark the first answer, just so the "question" gets closed.

Thanks for listening. Feel free to add comments.
Answer 0 (score: 7)
Yes, the Web Audio API (along with AJAX or Websockets) can be used for streaming.

Basically, you pull down (or, in the case of Websockets, receive) some chunks of n length. Then you decode them with the Web Audio API and queue them up to be played, one after the other.

Because the Web Audio API has high-precision timing, you won't hear any "seams" between the playback of each buffer if you do the scheduling correctly.
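The scheduling idea in this answer can be sketched as follows: keep a running "end of queued audio" time on the AudioContext clock and start each new buffer exactly there, so consecutive buffers butt up against one another. The pure bookkeeping is separated out so it can be shown on its own; the WebSocket framing is an assumption, and the wiring function is browser-only:

```javascript
// Compute when the next chunk should start so chunks play back to back:
// never before "now", and never before the previously queued audio ends.
function nextStartTime(previousEndTime, currentTime) {
  return Math.max(previousEndTime, currentTime);
}

// Browser-only wiring (not invoked here): decode each incoming chunk and
// schedule it at the running end time of the stream.
function playChunkedStream(socketUrl, audioContext) {
  let streamEndTime = 0; // AudioContext-clock time at which queued audio ends
  const ws = new WebSocket(socketUrl); // hypothetical stream endpoint
  ws.binaryType = 'arraybuffer';
  ws.onmessage = async (event) => {
    // Assumes each message is an independently decodable audio chunk.
    const audioBuffer = await audioContext.decodeAudioData(event.data);
    const source = audioContext.createBufferSource();
    source.buffer = audioBuffer;
    source.connect(audioContext.destination);
    const startAt = nextStartTime(streamEndTime, audioContext.currentTime);
    source.start(startAt);
    streamEndTime = startAt + audioBuffer.duration;
  };
}
```

Note that, as the question's update points out, decodeAudioData only cooperates when each chunk is a complete decodable unit; for raw PCM you would fill AudioBuffers directly instead of decoding.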
Answer 1 (score: 4)
I wrote a streaming Web Audio API system in which I use web workers to do all of the web socket management communicating with node.js, so that the browser thread simply renders audio... It works great on laptops; since mobile devices are behind on implementing web sockets inside web workers, you need no less than Lollipop for it to run as coded... I posted the full source code here.
Answer 2 (score: 1)
You have to create a new AudioBuffer and AudioBufferSourceNode (both, or at least the latter) for every piece of data that you want to buffer... I tried looping the same AudioBuffer, but once .buffer has been set (on the source node), any modification you make to the AudioBuffer becomes irrelevant.

(NOTE: These classes have base/parent classes you should also look at, as referenced in the documentation.)

Here's my preliminary solution that I got working (forgive the lack of comments after spending hours getting it to work), and it works great:
```javascript
class MasterOutput {
  constructor(computeSamplesCallback) {
    this.computeSamplesCallback = computeSamplesCallback.bind(this);
    this.onComputeTimeoutBound = this.onComputeTimeout.bind(this);

    this.audioContext = new AudioContext();
    this.sampleRate = this.audioContext.sampleRate;
    this.channelCount = 2;

    this.totalBufferDuration = 5;
    this.computeDuration = 1;
    this.bufferDelayDuration = 0.1;

    this.totalSamplesCount = this.totalBufferDuration * this.sampleRate;
    this.computeDurationMS = this.computeDuration * 1000.0;
    this.computeSamplesCount = this.computeDuration * this.sampleRate;
    this.buffersToKeep = Math.ceil((this.totalBufferDuration + 2.0 * this.bufferDelayDuration) /
      this.computeDuration);

    this.audioBufferSources = [];
    this.computeSamplesTimeout = null;
  }

  startPlaying() {
    if (this.audioBufferSources.length > 0) {
      this.stopPlaying();
    }

    //Start computing indefinitely, from the beginning.
    let audioContextTimestamp = this.audioContext.getOutputTimestamp();
    this.audioContextStartOffset = audioContextTimestamp.contextTime;
    this.lastTimeoutTime = audioContextTimestamp.performanceTime;
    for (this.currentBufferTime = 0.0; this.currentBufferTime < this.totalBufferDuration;
        this.currentBufferTime += this.computeDuration) {
      this.bufferNext();
    }
    this.onComputeTimeoutBound();
  }

  onComputeTimeout() {
    this.bufferNext();
    this.currentBufferTime += this.computeDuration;

    //Readjust the next timeout to have a consistent interval, regardless of computation time.
    let nextTimeoutDuration = 2.0 * this.computeDurationMS - (performance.now() - this.lastTimeoutTime) - 1;
    this.lastTimeoutTime = performance.now();
    this.computeSamplesTimeout = setTimeout(this.onComputeTimeoutBound, nextTimeoutDuration);
  }

  bufferNext() {
    this.currentSamplesOffset = this.currentBufferTime * this.sampleRate;

    //Create an audio buffer, which will contain the audio data.
    this.audioBuffer = this.audioContext.createBuffer(this.channelCount, this.computeSamplesCount,
      this.sampleRate);

    //Get the audio channels, which are float arrays representing each individual channel for the buffer.
    this.channels = [];
    for (let channelIndex = 0; channelIndex < this.channelCount; ++channelIndex) {
      this.channels.push(this.audioBuffer.getChannelData(channelIndex));
    }

    //Compute the samples.
    this.computeSamplesCallback();

    //Creates a lightweight audio buffer source which can be used to play the audio data. Note: This can only be
    //started once...
    let audioBufferSource = this.audioContext.createBufferSource();
    //Set the audio buffer.
    audioBufferSource.buffer = this.audioBuffer;
    //Connect it to the output.
    audioBufferSource.connect(this.audioContext.destination);
    //Start playing when the audio buffer is due.
    audioBufferSource.start(this.audioContextStartOffset + this.currentBufferTime + this.bufferDelayDuration);

    while (this.audioBufferSources.length >= this.buffersToKeep) {
      this.audioBufferSources.shift();
    }
    this.audioBufferSources.push(audioBufferSource);
  }

  stopPlaying() {
    if (this.audioBufferSources.length > 0) {
      for (let audioBufferSource of this.audioBufferSources) {
        audioBufferSource.stop();
      }
      this.audioBufferSources = [];
      clearInterval(this.computeSamplesTimeout);
      this.computeSamplesTimeout = null;
    }
  }
}
```
```javascript
window.onload = function() {
  let masterOutput = new MasterOutput(function() {
    //Populate the audio buffer with audio data.
    let currentSeconds;
    let frequency = 220.0;
    for (let sampleIndex = 0; sampleIndex < this.computeSamplesCount; ++sampleIndex) {
      currentSeconds = (sampleIndex + this.currentSamplesOffset) / this.sampleRate;
      //For a sine wave.
      this.channels[0][sampleIndex] = 0.005 * Math.sin(currentSeconds * 2.0 * Math.PI * frequency);
      //Copy the right channel from the left channel.
      this.channels[1][sampleIndex] = this.channels[0][sampleIndex];
    }
  });
  masterOutput.startPlaying();
};
```
Some details:

- You can create multiple MasterOutputs and play several things at once this way; you may, however, want to extract the AudioContext and share a single one across all of your code.
- The sampleRate comes from the AudioContext (48000 for me).
- When buffer sources are stopped (stop()) or fall out of the kept window, they get shift()ed off the list.
- Grabbing the audioContextTimestamp is important; its contextTime property tells me exactly when the audio started (each time), so I can use that time (this.audioContextStartOffset) later in the calls to audioBufferSource.start(), in order to schedule every audio buffer to play at exactly the right time.

EDIT: Yes, I was right (in the comments)! You can reuse expired AudioBuffers as desired. In many cases this will be the more "proper" way to handle things.
Here's the part of the code that had to change:
```javascript
...
    this.audioBufferDatas = [];
    this.expiredAudioBuffers = [];
...
  }

  startPlaying() {
    if (this.audioBufferDatas.length > 0) {
...
  bufferNext() {
...
    //Create/Reuse an audio buffer, which will contain the audio data.
    if (this.expiredAudioBuffers.length > 0) {
      //console.log('Reuse');
      this.audioBuffer = this.expiredAudioBuffers.shift();
    } else {
      //console.log('Create');
      this.audioBuffer = this.audioContext.createBuffer(this.channelCount, this.computeSamplesCount,
        this.sampleRate);
    }
...
    while (this.audioBufferDatas.length >= this.buffersToKeep) {
      this.expiredAudioBuffers.push(this.audioBufferDatas.shift().buffer);
    }
    this.audioBufferDatas.push({
      source: audioBufferSource,
      buffer: this.audioBuffer
    });
  }

  stopPlaying() {
    if (this.audioBufferDatas.length > 0) {
      for (let audioBufferData of this.audioBufferDatas) {
        audioBufferData.source.stop();
        this.expiredAudioBuffers.push(audioBufferData.buffer);
      }
      this.audioBufferDatas = [];
...
```
Here's my starting code, in case you want something simpler and don't need real-time audio streaming:
```javascript
window.onload = function() {
  const audioContext = new AudioContext();
  const channelCount = 2;
  const bufferDurationS = 5;

  //Create an audio buffer, which will contain the audio data.
  let audioBuffer = audioContext.createBuffer(channelCount, bufferDurationS * audioContext.sampleRate,
    audioContext.sampleRate);

  //Get the audio channels, which are float arrays representing each individual channel for the buffer.
  let channels = [];
  for (let channelIndex = 0; channelIndex < channelCount; ++channelIndex) {
    channels.push(audioBuffer.getChannelData(channelIndex));
  }

  //Populate the audio buffer with audio data.
  for (let sampleIndex = 0; sampleIndex < audioBuffer.length; ++sampleIndex) {
    channels[0][sampleIndex] = Math.sin(sampleIndex * 0.01);
    channels[1][sampleIndex] = channels[0][sampleIndex];
  }

  //Creates a lightweight audio buffer source which can be used to play the audio data.
  let audioBufferSource = audioContext.createBufferSource();
  audioBufferSource.buffer = audioBuffer;
  audioBufferSource.connect(audioContext.destination);
  audioBufferSource.start();
};
```
Unfortunately, this ^ particular code is no good for real-time audio, because it uses only one AudioBuffer and AudioBufferSourceNode, and, like I said, turning on looping doesn't let you modify the buffer... But, if you just want to play a sine wave for 5 seconds and then stop (or loop it (set loop to true and you're done)), this will do fine.
Answer 3 (score: 0)
To elaborate on the comment about playing back a bunch of separate buffers stored in an array by shifting the next one out each time:

If you create a buffer source via createBufferSource(), it has an onended event to which you can attach a callback, which fires when the source has reached its end. You can do something like this to play the chunks in the array one after another:
```javascript
function play() {
  //end of stream has been reached
  if (audiobuffer.length === 0) { return; }
  let source = context.createBufferSource();

  //get the latest buffer that should play next
  source.buffer = audiobuffer.shift();
  source.connect(context.destination);

  //add this function as a callback to play next buffer
  //when current buffer has reached its end
  source.onended = play;
  source.start();
}
```
Hope that helps. I'm still experimenting myself with how to get all of this smooth and seamless, but this is a good start, and one that's missing from a lot of posts online.