I'm trying to grab chunks with an AudioWorklet, as cwilso suggests in Using web audio api for analyzing input from microphone (convert MediaStreamSource to BufferSource), but unfortunately I couldn't get it running. Does anyone know how I can grab chunks from the stream with an AudioWorklet so that I can analyze them? Here is my code:
navigator.mediaDevices.getUserMedia({ audio: true, video: false })
  .then(function(stream) {
    // Create an audio context and feed the microphone stream into it
    var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
    var source = audioCtx.createMediaStreamSource(stream);

    // Use an AudioWorklet to grab the chunks and do detection:
    // https://developers.google.com/web/updates/2017/12/audio-worklet
    // https://developers.google.com/web/updates/2018/06/audio-worklet-design-pattern
    // https://webaudio.github.io/web-audio-api/#audioworklet
    audioCtx.audioWorklet.addModule('AudioWorklet.js').then(() => {
      let bypassNode = new AudioWorkletNode(audioCtx, 'bypass-processor');
      // A MediaStreamAudioSourceNode must be connected to another node,
      // not to the context itself: route the microphone through the worklet.
      source.connect(bypassNode);
    });
    // collect chunks for beat detection
    // do BPM detection
  })
  .catch(function(err) {
    /* handle the error */
    alert("Error");
  });
// Script in a separate file (AudioWorklet.js), as explained in the API docs
class BypassProcessor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    // Single input, single channel: copy the input straight to the output.
    const input = inputs[0];
    const output = outputs[0];
    output[0].set(input[0]);
    // Note: alert() does not exist inside the AudioWorkletGlobalScope;
    // returning false tells the engine to stop calling process().
    // Return true to keep processing while there are inputs.
    return true;
  }
}

registerProcessor('bypass-processor', BypassProcessor);
Answer 0 (score: 0)
Depending on how much processing you need to do, you may be able to do it in the AudioWorkletNode itself.
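As a minimal sketch of doing the analysis inside the processor, here is a plain helper that computes the RMS energy of one 128-frame render quantum (the `rms` function and `EnergyProcessor` name are illustrative, not part of the Web Audio API; the helper is a plain function so the math can run and be tested outside the worklet scope):

```javascript
// Root-mean-square energy of one render quantum (typically 128 samples).
function rms(block) {
  let sum = 0;
  for (let i = 0; i < block.length; i++) sum += block[i] * block[i];
  return Math.sqrt(sum / block.length);
}

// Inside AudioWorklet.js, a processor could then track energy per block:
// class EnergyProcessor extends AudioWorkletProcessor {
//   process(inputs, outputs) {
//     const level = rms(inputs[0][0]); // e.g. feed this to a beat detector
//     return true;                     // keep the processor alive
//   }
// }
// registerProcessor('energy-processor', EnergyProcessor);
```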
If not, you'll need to use a MessagePort to transfer the data from the AudioWorkletNode to the main thread. You might also be interested in MessagePort with AudioWorklet and AudioWorklet with SharedArrayBuffer and Worker.
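A minimal sketch of that MessagePort pattern: the processor accumulates the 128-frame blocks it receives in `process()` and posts a larger chunk to the main thread once enough samples have piled up. The `ChunkCollector` class and the chunk size are illustrative assumptions, not part of the Web Audio API; the buffering logic is kept as a plain class so it can run and be tested outside the worklet scope.

```javascript
// Accumulates fixed-size audio blocks into larger chunks for analysis.
class ChunkCollector {
  constructor(chunkSize) {
    this.chunkSize = chunkSize;
    this.buffer = new Float32Array(chunkSize);
    this.offset = 0;
  }
  // Append one render quantum (typically 128 samples).
  // Returns a copy of the finished chunk when full, otherwise null.
  push(block) {
    let result = null;
    for (let i = 0; i < block.length; i++) {
      this.buffer[this.offset++] = block[i];
      if (this.offset === this.chunkSize) {
        result = this.buffer.slice(); // copy out the completed chunk
        this.offset = 0;
      }
    }
    return result;
  }
}
```

Inside AudioWorklet.js the processor would then do something like `const chunk = this.collector.push(inputs[0][0]); if (chunk) this.port.postMessage(chunk); return true;`, and on the main thread `bypassNode.port.onmessage = (e) => { /* run BPM detection on e.data */ };` receives the chunks.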