Trying to fill a buffer from the TarsosDSP audioEvent

Posted: 2016-12-19 18:43:00

Tags: java audio-recording tarsosdsp

I am working on a simple beatbox application. At first I wrote everything in plain Java, then I found the fantastic TarsosDSP framework. But now I have run into a problem I cannot solve. Can you help me out?

I set up a SilenceDetector, which works fine. Then I want to fill a byte[] buffer with the audioEvent data inside the process method, and that is where I fail... The variable audioBuffer is of type ByteArrayOutputStream and is reused at runtime. See the relevant code snippet:

    private void setNewMixer(Mixer mixer)
            throws LineUnavailableException, UnsupportedAudioFileException {

        if (dispatcher != null) {
            dispatcher.stop();
        }
        currentMixer = mixer;

        //final AudioFormat format = new AudioFormat(sampleRate, frameRate, channel, true, true);
        final DataLine.Info dataLineInfo = new DataLine.Info(TargetDataLine.class, audioFormat);
        final TargetDataLine line = (TargetDataLine) mixer.getLine(dataLineInfo);
        final int numberOfSamples = bufferSize;
        line.open(audioFormat, numberOfSamples);
        line.start();
        final AudioInputStream stream = new AudioInputStream(line);

        JVMAudioInputStream audioStream = new JVMAudioInputStream(stream);
        // create a new dispatcher
        dispatcher = new AudioDispatcher(audioStream, bufferSize, overlap);

        // add a processor, handle percussion event.
        silenceDetector = new SilenceDetector(threshold, false);

        dispatcher.addAudioProcessor(bufferFiller);
        dispatcher.addAudioProcessor(silenceDetector);
        dispatcher.addAudioProcessor(this);

        // run the dispatcher (on a new thread).
        new Thread(dispatcher, "GunNoiseDetector Thread").start();
    }

    // Collects the raw bytes of every AudioEvent while isAdjusting is true;
    // once it turns false, the finished recording is moved into samples and the buffer is reset.
    final AudioProcessor bufferFiller = new AudioProcessor() {

        @Override
        public boolean process(AudioEvent audioEvent) {

            if (isAdjusting) {
                byte[] bb = audioEvent.getByteBuffer().clone();

                try {
                    audioBuffer.write(bb);
                } catch (IOException e) {
                    e.printStackTrace();
                }

                System.out.println("current buffer.size():: " + audioBuffer.size());

            } else {
                if (audioBuffer.size() > 0) {
                    try {
                        byte[] ba = audioBuffer.toByteArray();
                        samples.add(ba);
                        System.out.println("stored: " + ba.length);
                        audioBuffer.flush();
                        audioBuffer.close();
                        audioBuffer = new ByteArrayOutputStream();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            }

            return true;
        }

        @Override
        public void processingFinished() {
        }
    };

    // Switches isAdjusting depending on whether the current sound pressure level is above the threshold.
    @Override
    public boolean process(AudioEvent audioEvent) {
        if (silenceDetector.currentSPL() > threshold) {
            isAdjusting = true;
            lastAction = System.currentTimeMillis();
        } else {
            isAdjusting = false;
        }

        return true;
    }
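
For illustration only (none of this is in the original code): each finished byte[] in samples holds raw PCM bytes in the recording's AudioFormat, so a stored segment can be wrapped back into an AudioInputStream and played through a Clip. A minimal sketch, assuming the same audioFormat that was used for recording; the class and method names here are hypothetical:

    import java.io.ByteArrayInputStream;
    import java.io.IOException;
    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioInputStream;
    import javax.sound.sampled.AudioSystem;
    import javax.sound.sampled.Clip;
    import javax.sound.sampled.LineUnavailableException;

    // Hypothetical helper, not part of the original code.
    class SamplePlayback {

        // Wraps one stored byte[] segment in an AudioInputStream and plays it.
        static void play(byte[] sample, AudioFormat format)
                throws LineUnavailableException, IOException {
            // The stream length is given in frames: total bytes / bytes per frame.
            AudioInputStream in = new AudioInputStream(
                    new ByteArrayInputStream(sample),
                    format,
                    sample.length / format.getFrameSize());
            Clip clip = AudioSystem.getClip();
            clip.open(in);
            clip.start(); // playback is asynchronous; keep the Clip referenced until it finishes
        }
    }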

Any suggestions?

1 Answer:

Answer 0 (score: 0):

I found the reason why it did not work! It is just as mentioned here: What is the meaning of frame rate in AudioFormat?

For PCM, A-law and μ-law data, a frame consists of all the data that belongs to one sampling interval. This means that the frame rate is the same as the sample rate.

So my AudioFormat was wrong!
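
For reference, the five-argument AudioFormat constructor builds a linear PCM format and derives both the frame size and the frame rate from the other parameters, so the frame rate automatically equals the sample rate. A minimal sketch, assuming 44.1 kHz, 16-bit, mono, signed, big-endian audio (placeholder values, not necessarily the ones from my project):

    import javax.sound.sampled.AudioFormat;

    // Placeholder values: 44.1 kHz, 16-bit, mono, signed, big-endian.
    // This constructor assumes linear PCM and derives
    //   frameSize = channels * sampleSizeInBits / 8
    //   frameRate = sampleRate
    // so the frame rate automatically matches the sample rate.
    final float sampleRate = 44100f;
    final AudioFormat audioFormat =
            new AudioFormat(sampleRate, 16, 1, true, true);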