Creating AudioBuffers from MediaRecorder chunks

Time: 2019-11-19 07:02:39

Tags: javascript typescript audio webrtc

I am trying to analyze incoming audio in real time with a function that takes an AudioBuffer. I convert the Blob the recorder hands me using the function below, but it throws DOMException: Unable to decode audio data

async function audioBufferFromBlob(blob: Blob, audioCtx: AudioContext): Promise<AudioBuffer> {
    return await audioCtx.decodeAudioData(await new Response(blob).arrayBuffer());
}

I suspect this is because the Blob delivered by the recorder's dataavailable event is not a complete audio stream but only a chunk of one. Is there a way to assemble the Blobs into a complete audio stream?
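That suspicion matches how MediaRecorder behaves with audio/webm: only the first chunk carries the container header, so a later chunk cannot be decoded on its own. A common workaround, sketched below (the appendChunk helper is hypothetical, not part of the original code), is to accumulate every chunk since recording started and decode the concatenation each time:

```typescript
// Sketch of a workaround (assumption, not from the original post): keep
// every chunk the recorder emits and decode the concatenation, so the
// WebM header from the first chunk is always present.
const chunks: Blob[] = [];

function appendChunk(chunk: Blob): Blob {
    chunks.push(chunk);
    // Blob concatenation preserves byte order, so this is the complete
    // recording so far as a single blob.
    return new Blob(chunks, { type: 'audio/webm' });
}

// In the browser, inside recorder.ondataavailable:
//   const whole = appendChunk(event.data);
//   const buffer = await audioCtx.decodeAudioData(await whole.arrayBuffer());
```

Note that re-decoding the whole recording gets more expensive as it grows; for live pitch analysis, feeding the MediaStream into the Web Audio graph directly (e.g. via an AnalyserNode) avoids decoding altogether.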

Full code:

import PitchFinder = require('pitchfinder');

const amdf = PitchFinder.AMDF();

function logPitch(pitch: [number, number], elem: HTMLElement) {
    elem.innerText = pitch.toString();
}

function detectPitch(audioBuffer: AudioBuffer): [number, number] {
    let frequency = amdf(audioBuffer.getChannelData(0));
    return [frequency, noteFromFrequency(frequency)];
}

async function audioBufferFromBlob(blob: Blob, audioCtx: AudioContext): Promise<AudioBuffer> {
    return await audioCtx.decodeAudioData(await new Response(blob).arrayBuffer());
}

function noteFromFrequency(frequency: number ): number {
    let noteNum = 12 * (Math.log( frequency / 440 )/Math.log(2) );
    return Math.round( noteNum ) + 69;
}

export function displayPitch(displayElem: HTMLElement) {
    let audioCtx = new AudioContext();
    navigator.mediaDevices.getUserMedia({ audio: true, video: false })
        .then(successCallback)
        .catch(errorCallback);

    function errorCallback() {
        alert("Something went wrong.");
    }
    function successCallback(stream: MediaStream) {
        let recorder = new MediaRecorder(stream, { mimeType: 'audio/webm'});
        recorder.ondataavailable = async (event: BlobEvent) => {
            let buffer = await audioBufferFromBlob(event.data, audioCtx);
            logPitch(detectPitch(buffer), displayElem);
        };
        recorder.start(500);
    }
}
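As a sanity check on the pitch-to-note mapping: noteFromFrequency above is the standard MIDI note-number formula n = round(69 + 12 * log2(f / 440)), so concert A (440 Hz) maps to 69 and each octave adds 12. A minimal standalone version:

```typescript
// Standard MIDI note-number formula, same math as noteFromFrequency above:
// n = round(69 + 12 * log2(f / 440))
function midiNote(frequency: number): number {
    return Math.round(12 * Math.log2(frequency / 440)) + 69;
}

// midiNote(440)    -> 69  (A4)
// midiNote(880)    -> 81  (A5, one octave up)
// midiNote(261.63) -> 60  (middle C)
```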

0 answers:

No answers yet