Playing audio chunks (~stream) from a WebSocket in JS

Date: 2017-11-28 17:03:58

Tags: javascript node.js html5 audio websocket

I am experimenting with WebSockets, the media APIs, browsers, and Node.js. My goal is to let two participants talk to each other live.

The ideal logic:

client-1_mic -> client-1_browser --> nodejs server -> client-2_browser -> client-2_speaker

I know WebRTC is the usual way to do this, but I want to implement it over WebSockets.

Note: I am using Firefox v56/57, Node.js (v8), npm, and an Ubuntu server.

My current solution can already communicate over WebSockets (channels/rooms exist too), so basic chat works.

The relevant part of my server side (server.js):

// Relay an incoming audio chunk either to the chosen target user or to the whole room
socket.on('audio', function (data) {
    if (socket.username in userTargets && userTargets[socket.username] in usersInRoom) {
        io.sockets.connected[usersInRoom[userTargets[socket.username]].socket].emit('updatechat', socket.username, 'New audio arrived');
        // echo a notice back to the sender
        io.sockets.connected[socket.id].emit('updatechat', socket.username, 'To ' + userTargets[socket.username] + '> here is some new audio');
        // Audio passing
        io.sockets.connected[usersInRoom[userTargets[socket.username]].socket].emit('audio', socket.username, data);
    } else {
        io.sockets.in(socket.room).emit('updatechat', socket.username, 'Some audio arrived');

        // Audio broadcast to the room @todo later add a restriction against the sender
        io.sockets.in(socket.room).emit('audio', socket.username, data);
    }
});

My client side, audio capture (client.js):

media = mediaOptions.audio;
navigator.mediaDevices.getUserMedia(media.gUM).then(_stream => {
    stream = _stream;
    recorder = new MediaRecorder(stream);
    chunks = [];

    recorder.ondataavailable = e => {
        chunks.push(e.data);
        socket.emit('audio', {audioBlob: e.data}); // Sending only the audio blob
    };

    // Without a timeslice, ondataavailable only fires on stop();
    // passing one makes the recorder emit a chunk every 250 ms.
    recorder.start(250);

    log('got media successfully');
}).catch(log);

The audio-receiving part (also client.js):

socket.on('audio', function (senderUser, data) {
    // Let's try to play the audio
    try {
        //audioBlob: ArrayBuffer { byteLength: 13542 }
        //audioBlob: <Buffer 4f 67 67 53 00 00 80 3f 04 00 00 00 00 00 b8 51 43 3c 0a 00 00 00 e3 d9 48 13 20 80 81 80 81 80 81 80 81 80 81 7e 7f 78 78 7c 82 89 86 8a 81 80 81 80 ...
        audioBufferContainer.push(data.audioBlob);
        console.log('Blob chunk added!');
        /*
        // missing: time/duration handling & filling the gaps with silence
        // missing: Blob -> ArrayBuffer conversion before decoding
        audioCtx.decodeAudioData(data.audioBlob, function(myAudioBuffer) {
            audioBufferSource.buffer = myAudioBuffer;
            audioBufferSource.connect(audioCtx.destination); // not audioCtx.audioDestination
            console.log('Audio buffer passed to the buffer source');
            audioBufferSource.start(); // AudioBufferSourceNode has start(), not play()
        });
        */
    }
    catch(e) {
        console.error('Some error during replaying: ' + e.message);
    }
});
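For completeness, a rough sketch of where the commented-out part is heading: decode each chunk and schedule it back-to-back on the AudioContext clock. This assumes each received blob is a complete, decodable file (decodeAudioData typically cannot handle a headerless mid-stream chunk); audioCtx and nextPlayTime are illustrative names:

// Sketch: decode each complete chunk and schedule it seamlessly on the
// AudioContext clock. Assumes every received blob is a full, decodable
// file (header included), not a headerless mid-stream slice.
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var nextPlayTime = 0; // AudioContext time at which the next chunk starts

socket.on('audio', function (senderUser, data) {
    // Blob -> ArrayBuffer (FileReader also works on older browsers)
    new Response(data.audioBlob).arrayBuffer().then(function (arrayBuffer) {
        audioCtx.decodeAudioData(arrayBuffer, function (audioBuffer) {
            var source = audioCtx.createBufferSource();
            source.buffer = audioBuffer;
            source.connect(audioCtx.destination);

            // never schedule in the past; otherwise play gap-free
            nextPlayTime = Math.max(nextPlayTime, audioCtx.currentTime);
            source.start(nextPlayTime);
            nextPlayTime += audioBuffer.duration;
        });
    });
});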

If I simply collect the sent audio blobs (from beginning to end) and just concatenate them, I get a playable audio file (e.g. the transfer is correct: no data loss or corruption).
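That sanity check can be done with something like this (a minimal sketch; the 'audio/ogg; codecs=opus' MIME type is an assumption based on the OggS magic bytes visible in the dump above):

// Sketch: stitch all gathered chunks into one Blob and play it.
// This works because the first chunk carries the container header
// and the rest are one continuous stream.
function playGathered() {
    var fullBlob = new Blob(audioBufferContainer, {type: 'audio/ogg; codecs=opus'});
    var audioEl = new Audio();
    audioEl.src = URL.createObjectURL(fullBlob);
    audioEl.play();
}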

My questions:

- How should I handle the gathered/received audio chunks (blobs)? Should I convert each one into an ArrayBuffer, pass it to the AudioContext, set the time/offset, and play it (and maybe fill the gaps with silence)?
- Or is there any other lib/API I can use to play back a "stream" like this?
- Or should I just add an <audio src="ws://<host>:<port>"> type of tag on the client and let the browser handle the playback?
- Or is it better to add a timer on the sender side and always send a complete, half-second or second-long audio file (one that has the header, the metadata, and the closing as well), then keep sending these repeatedly like separate tracks, and on the client side just stack them into a source queue and play them like an album? (See the sketch after this list.)
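A rough sketch of that last idea, assuming the sender restarts the MediaRecorder on a timer so every emitted blob is a complete file (segmentMs, trackQueue, and player are illustrative names):

// Sender sketch: restart the recorder on a timer so every blob is a
// complete file (header + data + closing), not a headerless slice.
// Note: real segments will have tiny gaps/overlaps; this is only a sketch.
var segmentMs = 1000; // illustrative segment length

setInterval(function () {
    var rec = new MediaRecorder(stream);
    rec.ondataavailable = function (e) {
        socket.emit('audio', {audioBlob: e.data});
    };
    rec.start();
    setTimeout(function () { rec.stop(); }, segmentMs); // stop() flushes one full file
}, segmentMs);

// Receiver sketch: stack the segments into a queue and play them like tracks.
var trackQueue = [];
var player = new Audio();
player.onended = function () {
    if (trackQueue.length) playNextTrack();
};

function playNextTrack() {
    player.src = URL.createObjectURL(trackQueue.shift());
    player.play();
}

socket.on('audio', function (senderUser, data) {
    trackQueue.push(data.audioBlob);
    if (player.paused || player.ended) playNextTrack();
});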

Note: I know there is a lib called binaryJS, but it hasn't been maintained (for ~5 years) and its example doesn't work properly.

Can anyone give me some advice, or a hint about what I'm not getting or where I went wrong?

PS: I don't have an extensive JS background, only other languages, so I'm not familiar with async-style structures or advanced Node.js solutions.

0 Answers