Microphone activity level of a WebRTC MediaStream

Date: 2013-05-23 21:43:59

Tags: javascript audio microphone webrtc getusermedia

I would like to know the best way to get the microphone activity level of an audio MediaStreamTrack JavaScript object in Chrome/Canary. The MediaStreamTrack object is an audio track of the MediaStream returned by getUserMedia, as part of the WebRTC JavaScript API.
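
For reference, a minimal sketch of the setup being asked about, getting the audio MediaStreamTrack from getUserMedia (shown here with the promise-based API; the track object itself exposes no level property, which is why the answers below route the stream through the Web Audio API):

navigator.mediaDevices.getUserMedia({ audio: true })
    .then(function (stream) {
        var audioTrack = stream.getAudioTracks()[0]; // the MediaStreamTrack in question
        console.log(audioTrack.kind, audioTrack.label); // "audio", plus e.g. the microphone name
    })
    .catch(function (err) {
        console.error('getUserMedia failed:', err);
    });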

3 answers:

Answer 0 (score: 19):

The green bar moves up and down nicely whenever the microphone picks up audio:

<script type="text/javascript">
navigator.webkitGetUserMedia({audio:true, video:true}, function(stream){
    // audioContext = new webkitAudioContext(); deprecated  OLD!!
    audioContext = new AudioContext(); // NEW!!
    analyser = audioContext.createAnalyser();
    microphone = audioContext.createMediaStreamSource(stream);
    javascriptNode = audioContext.createScriptProcessor(2048, 1, 1); // createJavaScriptNode was renamed to createScriptProcessor

    analyser.smoothingTimeConstant = 0.3;
    analyser.fftSize = 1024;

    microphone.connect(analyser);
    analyser.connect(javascriptNode);
    javascriptNode.connect(audioContext.destination);

    //canvasContext = $("#canvas")[0].getContext("2d");
    canvasContext = document.getElementById("test");
    canvasContext= canvasContext.getContext("2d");

    javascriptNode.onaudioprocess = function() {
        var array =  new Uint8Array(analyser.frequencyBinCount);
        analyser.getByteFrequencyData(array);
        var values = 0;

        var length = array.length;
        for (var i = 0; i < length; i++) {
            values += array[i];
        }

        var average = values / length;
        canvasContext.clearRect(0, 0, 60, 130);
        canvasContext.fillStyle = '#00ff00';
        canvasContext.fillRect(0,130-average,25,130);
    }

}  
);
</script>
<canvas id="test" style="background-color: black;"></canvas>
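
Note that the ScriptProcessorNode used above has since been deprecated. A minimal alternative sketch, assuming the same analyser and canvasContext variables set up in the snippet above, polls the analyser from requestAnimationFrame instead of an audio-processing callback:

function drawLevel() {
    var array = new Uint8Array(analyser.frequencyBinCount);
    analyser.getByteFrequencyData(array);

    var sum = 0;
    for (var i = 0; i < array.length; i++) {
        sum += array[i];
    }
    var average = sum / array.length; // rough 0-255 activity estimate

    canvasContext.clearRect(0, 0, 60, 130);
    canvasContext.fillStyle = '#00ff00';
    canvasContext.fillRect(0, 130 - average, 25, 130);

    requestAnimationFrame(drawLevel); // redraw on the next display frame
}
requestAnimationFrame(drawLevel);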

Answer 1 (score: 7):

What you are looking for is webkitAudioContext and its createMediaStreamSource method.

Here is a code sample that draws a green bar to act as a VU meter:

navigator.webkitGetUserMedia({audio:true, video:true}, function(stream){
    audioContext = new webkitAudioContext();
    analyser = audioContext.createAnalyser();
    microphone = audioContext.createMediaStreamSource(stream);
    javascriptNode = audioContext.createJavaScriptNode(2048, 1, 1);

    analyser.smoothingTimeConstant = 0.3;
    analyser.fftSize = 1024;

    microphone.connect(analyser);
    analyser.connect(javascriptNode);
    javascriptNode.connect(audioContext.destination);

    canvasContext = $("#canvas")[0].getContext("2d"); // assumes jQuery and a <canvas id="canvas"> element

    javascriptNode.onaudioprocess = function() {
        var array =  new Uint8Array(analyser.frequencyBinCount);
        analyser.getByteFrequencyData(array);
        var values = 0;

        var length = array.length;
        for (var i = 0; i < length; i++) {
            values += array[i];
        }

        var average = values / length;
        canvasContext.clearRect(0, 0, 60, 130);
        canvasContext.fillStyle = '#00ff00';
        canvasContext.fillRect(0,130-average,25,130);
    }

});

More details about AudioContext

Answer 2 (score: 4):

Update: modify the code to use the promise-based getUserMedia call, as follows:

navigator.mediaDevices.getUserMedia(constraints).then(
    function(stream){
        // code ... 
    }).catch(function(err) {
        // code ... 
});

Here is a fiddle: https://jsfiddle.net/elshnkhll/p07e5vcq/
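
For completeness, a minimal sketch of how the VU-meter code from the earlier answers might be wired up to the promise-based call (the element id "meter" and the variable names are illustrative, not from the original answers):

// Assumes a <canvas id="meter" width="60" height="130"></canvas> on the page.
navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
    var audioContext = new AudioContext();
    // If the context starts suspended (autoplay policy), audioContext.resume()
    // after a user gesture un-suspends it.
    var analyser = audioContext.createAnalyser();
    analyser.smoothingTimeConstant = 0.3;
    analyser.fftSize = 1024;

    audioContext.createMediaStreamSource(stream).connect(analyser);

    var canvasContext = document.getElementById('meter').getContext('2d');
    var data = new Uint8Array(analyser.frequencyBinCount);

    (function draw() {
        analyser.getByteFrequencyData(data);
        var sum = 0;
        for (var i = 0; i < data.length; i++) {
            sum += data[i];
        }
        var average = sum / data.length; // rough 0-255 activity estimate

        canvasContext.clearRect(0, 0, 60, 130);
        canvasContext.fillStyle = '#00ff00';
        canvasContext.fillRect(0, 130 - average, 25, 130);

        requestAnimationFrame(draw);
    })();
}).catch(function (err) {
    console.error('getUserMedia failed:', err);
});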
