Float conversion of JVST input/output data

Time: 2014-02-12 00:28:28

Tags: java audio floating-point vst

I am trying to process audio through a VST plugin loaded via JVST. Roughly, what I am doing is:

1 open an audio input stream on a wav file
2 while the file is not finished
  2.1 read a block of frames and store it as byte[]
  2.2 convert the byte[] to float[]
  2.3 process the float[] with a JVST call to the VST plugin
  2.4 convert the float[] back to byte[]
  2.5 write the byte[] to the audio output stream

Here is what happens: if I comment out step 2.3, the audio gets converted from bytes to floats and back, and it sounds perfect. If I run the VST processing instead, I get harsh white noise. I really don't know how to proceed. My gut feeling is that something may be wrong with the byte[] to float[] conversion, but I can't tell what. I tried changing the endianness of the bytes, with no luck. Does anyone have a suggestion? Here is the actual code file:

import java.io.File;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.FloatBuffer;

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.SourceDataLine;
import javax.sound.sampled.UnsupportedAudioFileException;

// JVST imports for AEffect, VST and MiniHost omitted

public class ByteConv
{
    public static void main(String[] args) throws Exception {

        AEffect effect = VST.load("G:/AnalogDelay");
        // Startup the plugin
        // Ask the plugin to display its GUI using the SWT window handle
        MiniHost miniHost = new MiniHost(effect);
//        miniHost.setBlockOnOpen(true);
//        miniHost.open();
        effect.open();
        effect.setSampleRate(44100.0f);
        effect.setBlockSize(512);

        File file = new File("C:\\Users\\Laimon\\Desktop\\wma-01.wav");

        try {
            AudioFormat format = AudioSystem.getAudioFileFormat(file).getFormat();
            System.out.println(format.toString());
            AudioInputStream inputStream = AudioSystem.getAudioInputStream(file);
            SourceDataLine sourceLine = AudioSystem.getSourceDataLine(format);
            sourceLine.open();
            sourceLine.start();

            int bytesPerFrame = inputStream.getFormat().getFrameSize();
            if (bytesPerFrame == AudioSystem.NOT_SPECIFIED) {
                // some audio formats may have unspecified frame size
                // in that case we may read any amount of bytes
                bytesPerFrame = 1;
            }
            // Set an arbitrary buffer size of 512 frames.
            int numBytes = 512 * bytesPerFrame;
            byte[] audioBytes = new byte[numBytes];
            int numBytesRead = 0;
            // Try to read numBytes bytes from the file.
            while ((numBytesRead = inputStream.read(audioBytes)) != -1) {
                // Convert byte[] into float[] for processing
                float[] monoInput = byteArrayToFloatArray(audioBytes, numBytesRead);
                // Prepare input array with same wave on all channels
                float[][] vstInput = new float[effect.numInputs][];
                for (int i = 0; i < vstInput.length; i++)
                    vstInput[i] = monoInput;
                // Allocate output array of same size
                float[][] vstOutput = new float[effect.numOutputs][monoInput.length];

                effect.processReplacing(vstInput, vstOutput, vstInput[0].length);

                // Use a separate buffer so the read buffer keeps its full size
                byte[] outBytes = floatArrayToByteArray(vstOutput[0]);

                sourceLine.write(outBytes, 0, outBytes.length);
            }
        } catch(IOException | LineUnavailableException | UnsupportedAudioFileException ex) {
            ex.printStackTrace();
        }

        VST.dispose(effect);
    }

    private static float[] byteArrayToFloatArray(byte[] barray, int n) {

        // We assume n is between 0 and barray.length;
        // only the first n bytes of the buffer hold valid audio data
        ByteBuffer bb = ByteBuffer.wrap(barray, 0, n);
        FloatBuffer fb = bb.asFloatBuffer();
        float[] flush = new float[fb.remaining()];
        fb.get(flush);
        return flush;
    }

    private static byte[] floatArrayToByteArray(float[] farray) {

        ByteBuffer bb = ByteBuffer.allocate(farray.length*4);
        for (int i = 0; i < farray.length; i++)
            bb.putFloat(i*4, farray[i]);
        return bb.array();
    }
}
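From the format printout I believe the wav file holds 16-bit signed PCM rather than raw IEEE floats, so perhaps the conversion should decode each 16-bit sample and normalize it to [-1, 1] instead of reinterpreting the bytes as floats. Here is a sketch of what I mean (the class name, and the assumption of little-endian 16-bit samples, are mine; they would need to match the actual `AudioFormat`):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PcmConv {

    // Decode 16-bit signed little-endian PCM bytes into floats in [-1, 1]
    static float[] pcm16ToFloat(byte[] bytes, int n) {
        ByteBuffer bb = ByteBuffer.wrap(bytes, 0, n).order(ByteOrder.LITTLE_ENDIAN);
        float[] out = new float[n / 2];
        for (int i = 0; i < out.length; i++)
            out[i] = bb.getShort(i * 2) / 32768f;
        return out;
    }

    // Encode floats in [-1, 1] back to 16-bit signed little-endian PCM
    static byte[] floatToPcm16(float[] samples) {
        ByteBuffer bb = ByteBuffer.allocate(samples.length * 2)
                                  .order(ByteOrder.LITTLE_ENDIAN);
        for (float s : samples) {
            // clamp to avoid wrap-around if the plugin overshoots
            float c = Math.max(-1f, Math.min(1f, s));
            bb.putShort((short) (c * 32767f));
        }
        return bb.array();
    }

    public static void main(String[] args) {
        byte[] pcm = {0x00, 0x40};        // 0x4000 = 16384 -> 0.5
        float[] f = pcm16ToFloat(pcm, 2);
        System.out.println(f[0]);         // 0.5
        byte[] back = floatToPcm16(f);
        System.out.println(back[0] + " " + back[1]);
    }
}
```

If this is right, it would also explain why the round-trip without the VST sounds fine: the reinterpretation is its own exact inverse, but the plugin receives garbage values far outside [-1, 1].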

Thanks in advance for your help!

0 Answers:

No answers