How can I create a TargetDataLine from a byte-array WebSocket?

Asked: 2015-10-11 04:13:30

标签: java websocket bytearray audio-processing

I created a byte-array WebSocket that receives real-time audio chunks from the client's microphone (navigator.getUserMedia). I am already recording this stream to a WAV file on the server, but after a while the WebSocket stops receiving new byte arrays. The code below shows the current state of things.

The WebSocket

@OnMessage
public void message(byte[] b) throws IOException {
    // Lazily create the buffer on the first chunk, then append every chunk
    if (byteOutputStream == null) {
        byteOutputStream = new ByteArrayOutputStream();
    }
    byteOutputStream.write(b);
}
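Incidentally, one thing worth checking for the "stops receiving" symptom (this is my guess, not from the original post): JSR-356 containers limit binary messages to a default buffer size, and oversized messages can terminate the session. A minimal sketch that raises the limit when the session opens; the 64 KB figure is an arbitrary example.

@OnOpen
public void open(Session session) {
    // The default binary buffer is container-dependent; raise it explicitly
    session.setMaxBinaryMessageBufferSize(64 * 1024);
}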

The thread that stores the WAV file

public void store() {
    byte b[] = byteOutputStream.toByteArray();
    try {
        AudioFormat audioFormat = new AudioFormat(44100, 16, 1, true, true);
        ByteArrayInputStream byteStream = new ByteArrayInputStream(b);
        // AudioInputStream expects the length in sample frames, not bytes
        AudioInputStream audioStream = new AudioInputStream(byteStream, audioFormat,
                b.length / audioFormat.getFrameSize());
        DateTime date = new DateTime(); // Joda-Time
        File file = new File("/tmp/" + date.getMillis() + ".wav");
        AudioSystem.write(audioStream, AudioFileFormat.Type.WAVE, file);
        audioStream.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}

However, rather than recording a WAV file, my goal for this WebSocket is to process the audio in real time with the YIN pitch detection algorithm implemented in the TarsosDSP library. In other words, I essentially want to run the PitchDetectorExample, but fed with the data from the WebSocket instead of the default audio device (the OS microphone). The code below shows how PitchDetectorExample currently initializes real-time audio processing from a microphone line provided by the operating system.

private void setNewMixer(Mixer mixer) throws LineUnavailableException, UnsupportedAudioFileException {      
    if (dispatcher != null) {
        dispatcher.stop();
    }
    currentMixer = mixer;
    float sampleRate = 44100;
    int bufferSize = 1024;
    int overlap = 0;
    final AudioFormat format = new AudioFormat(sampleRate, 16, 1, true, true);
    final DataLine.Info dataLineInfo = new DataLine.Info(TargetDataLine.class, format);
    TargetDataLine line;
    line = (TargetDataLine) mixer.getLine(dataLineInfo);
    final int numberOfSamples = bufferSize;
    line.open(format, numberOfSamples);
    line.start();
    final AudioInputStream stream = new AudioInputStream(line);
    JVMAudioInputStream audioStream = new JVMAudioInputStream(stream);
    // create a new dispatcher
    dispatcher = new AudioDispatcher(audioStream, bufferSize, overlap);
    // add a processor
    dispatcher.addAudioProcessor(new PitchProcessor(algo, sampleRate, bufferSize, this));
    new Thread(dispatcher,"Audio dispatching").start();
}

Is there a way to treat the WebSocket data as a TargetDataLine so that it can be hooked up to the AudioDispatcher and PitchProcessor? Somehow I need to feed the byte arrays received by the WebSocket into the audio processing thread.
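One possible bridge, sketched here as an assumption rather than a tested solution: write each incoming chunk into a PipedOutputStream and let TarsosDSP read from the matching PipedInputStream, wrapped as an AudioInputStream of the known format. The classes come from java.io, javax.sound.sampled, and TarsosDSP; the pipe size is illustrative, and algo and this are the same fields used in the example above.

// Set up once, before audio starts arriving
PipedOutputStream socketOut = new PipedOutputStream();
PipedInputStream socketIn = new PipedInputStream(socketOut, 64 * 1024);

AudioFormat format = new AudioFormat(44100, 16, 1, true, true);
// Length is unknown for a live stream, hence NOT_SPECIFIED
AudioInputStream audioIn = new AudioInputStream(socketIn, format, AudioSystem.NOT_SPECIFIED);
JVMAudioInputStream audioStream = new JVMAudioInputStream(audioIn);
AudioDispatcher dispatcher = new AudioDispatcher(audioStream, 1024, 0);
dispatcher.addAudioProcessor(new PitchProcessor(algo, 44100, 1024, this));
new Thread(dispatcher, "Audio dispatching").start();

// In the WebSocket endpoint, forward every chunk into the pipe
@OnMessage
public void message(byte[] b) throws IOException {
    socketOut.write(b); // blocks if the dispatcher falls behind
}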

Any other ideas for accomplishing this are welcome. Thanks!

1 Answer:

Answer 0 (score: 1)

I'm not sure you need the AudioDispatcher at all. If you know how the bytes are encoded (PCM, 16-bit, little-endian, mono?), you can convert them to floats on the fly and feed them to the pitch detector algorithm. In your WebSocket handler you can do something like this (and forget about the input streams and the AudioDispatcher):

int index;
byte[] buffer = new byte[2048];
float[] floatBuffer = new float[1024];
FastYin detector = new FastYin(44100, 1024);
// 16-bit, signed, little-endian, mono PCM; must match what the client sends
AudioFloatConverter converter = AudioFloatConverter.getConverter(
        new TarsosDSPAudioFormat(44100, 16, 1, true, false));

public void message(byte[] b) {
    for (int i = 0; i < b.length; i++) {
        buffer[index] = b[i];
        index++;
        if (index == 2048) {
            // converts the byte buffer to floats in [-1, 1]
            converter.toFloatArray(buffer, floatBuffer);
            float pitch = detector.getPitch(floatBuffer).getPitch();
            // here you have your pitch info that you can use
            index = 0;
        }
    }
}

You do need to keep track of how many bytes have already been handed over: since two bytes represent one float (with 16-bit PCM encoding), each conversion must start on an even byte offset. Endianness and sample rate matter as well.
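To make the alignment point concrete, this is what the converter does for each byte pair (illustrative only, with n as a hypothetical sample index):

// Two consecutive little-endian bytes form one signed 16-bit sample;
// dividing by 32768 maps it into the [-1, 1] range the detector expects
short sample = (short) ((buffer[2 * n] & 0xFF) | (buffer[2 * n + 1] << 8));
float value = sample / 32768f;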

Regards,

净莲