How to accurately time-align mouse events and audio recording events in JavaScript?

Date: 2018-10-01 16:15:04

Tags: javascript events audio-recording data-synchronization

In terms of timing accuracy, what is the most precise way to time-align mouse events with recorded audio data?

Or, put more precisely: how can we estimate the time offset, relative to performance.timing.navigationStart, at which the audio recording (done with an AudioContext/ScriptProcessor) actually starts?

My web application collects mouse events and simultaneously records audio data from the microphone (using an AudioContext with a ScriptProcessor node).
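
Both event streams end up on the same clock: since Chrome 49, Event.timeStamp is a DOMHighResTimeStamp whose origin is performance.timing.navigationStart, the same origin as performance.now(). A small illustrative sanity check (not part of the original post):

// both values below are milliseconds since performance.timing.navigationStart
// in Chrome >= 49, so they can be compared directly
document.addEventListener("mousedown", function(e) {
  console.log("event.timeStamp  : " + e.timeStamp.toFixed(3) + " ms");
  console.log("performance.now(): " + performance.now().toFixed(3) + " ms");
});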

As a dummy test of the time alignment, I click the mouse right next to the microphone, so that a sound is emitted at the same instant the mouse button is pressed (or the touchscreen is tapped): I can then record the event.timeStamp property of both the mouse onmousedown event and the audio processing onaudioprocess event (see event.timeStamp in Chrome > 49), both referred to performance.timing.navigationStart. The start time of the audio recording is estimated from the timeStamp of the first onaudioprocess event. Since Chrome supports AudioContext.baseLatency (see AudioContext.baseLatency), I subtract it from that timestamp (or should I add it? I am not sure). The code below shows how _startRecTime is estimated.
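
In other words, the estimate boils down to the following sketch (function and parameter names here are illustrative, not from the original code):

// the first onaudioprocess event fires only after one full buffer has been
// captured, so the capture started roughly one buffer duration earlier;
// baseLatency (exposed by Chrome) is then subtracted as a further correction
function estimateStartRecTime(eventTimeStampMs, bufferLength, audioContext) {
  return eventTimeStampMs / 1000                 // s since navigationStart
       - bufferLength / audioContext.sampleRate  // duration of the 1st buffer
       - (audioContext.baseLatency || 0);        // latency correction, if any
}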

I am currently testing this on Chrome 69 (on a six-core Windows PC and on a quad-core ASUS tablet running Android).

Thanks to @Kaiido's suggestion to use the onmousedown event instead of onclick, I now reach an error of about ±0.03 s with respect to the expected alignment, while my goal is an error of at most ±0.01 s.
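
The reason onmousedown works better here is that "click" only fires once the button is released, typically tens of milliseconds after "mousedown". A quick illustrative way to observe the gap (assuming the #clickMe element from the snippet below):

var clickMe = document.getElementById("clickMe");
clickMe.addEventListener("mousedown", function(e) { console.log("mousedown: " + e.timeStamp); });
clickMe.addEventListener("click",     function(e) { console.log("click:     " + e.timeStamp); });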

Is there a better way to estimate _startRecTime?

Here is a minimal snippet that monitors the timing of the mouse click events and of the audio events. Note that the page must be served over https for the audio capture to work:

var myAudioPeakThreshold = 0.001;
var myInChannels = 2;
var myOutChannels = 2;
var myBitsPerSample = 16;
var mySampleRate = 48000;
var myBufferSize = 16384;
var myLatency = 0.01;

var _samplesCount = 0;
var _startRecTime = 0;

function debug(txt) {
  document.getElementById("debug").innerHTML += txt + "\r\n";
}

function onMouse(e) {
  // e.timeStamp is in ms since navigationStart; convert to seconds
  var tClick = e.timeStamp/1000;
  debug("onMouse: " + tClick.toFixed(6));
}

function myInit() {
  // thanks to Kaiido for pointing out that in this
  // context "onmousedown" is more effective than "onclick"
  document.getElementById("clickMe").onmousedown = onMouse;
  
  debug("INFO: initialising navigator.mediaDevices.getUserMedia");
  navigator.mediaDevices.getUserMedia({
      audio: {
        channelCount: myInChannels,
        latency: myLatency,
        sampleRate: mySampleRate,
        sampleSize: myBitsPerSample
      },
      video: false
    })
  .then(
    function(stream) {
      debug("INFO: navigator.mediaDevices.getUserMedia initialised");
      var audioContext = new AudioContext();
      var audioSource = audioContext.createMediaStreamSource(stream);

      debug("INFO: baseLatency is: " + (audioContext.baseLatency ? audioContext.baseLatency.toFixed(3) : "unknown") + "s");
      debug("INFO: sampleRate is: " + audioContext.sampleRate.toFixed(0) + "Hz");
      // keep the processor node in a local variable: `this` is not
      // reliably bound inside a promise callback
      var node = audioContext.createScriptProcessor(
        myBufferSize,
        myInChannels,
        myOutChannels);
      
      // audio data processing callback
      node.onaudioprocess = function(e) {
        var samples = e.inputBuffer.getChannelData(0);
        var samplesCount = samples.length;

        // init timing: the first onaudioprocess event fires once a full
        // buffer has been captured, so step back by the buffer duration
        // (and by baseLatency, where reported) to estimate when the first
        // sample was recorded on the performance clock
        if(_samplesCount == 0) {
          _startRecTime = e.timeStamp/1000 - samplesCount / audioContext.sampleRate;
          if(typeof audioContext.baseLatency !== "undefined") {
            _startRecTime -= audioContext.baseLatency;
          }
        }

        // simple peak detection: time of the first sample in this buffer
        // exceeding the threshold
        var tPeak = 0, i = 0;
        while(i < samplesCount) {
          if(samples[i] > myAudioPeakThreshold) {
            tPeak = _startRecTime + (_samplesCount + i)/audioContext.sampleRate;

            debug("onPeak : " + tPeak.toFixed(6));
            break;
          }
          i++;
        }
        _samplesCount += samplesCount;
      }

      // connect the node between source and destination
      audioSource.connect(node);
      node.connect(audioContext.destination);
      return;
    })
  .catch(
    function(e) {
      debug("ERROR: navigator.mediaDevices.getUserMedia failed: " + e);
    });
}
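
The snippet needs a clickable element (#clickMe) and a debug output area (#debug) in the page. A minimal bootstrap for testing (assumed, not part of the original post) could be:

// assumed bootstrap: create the two elements the snippet expects,
// then start capturing
document.body.innerHTML =
  '<button id="clickMe">click me (next to the microphone)</button>' +
  '<pre id="debug"></pre>';
myInit();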

0 Answers:

No answers