Playing multiple byte arrays simultaneously in Java

Date: 2014-10-08 20:12:05

Tags: java audio concurrency javasound

How do I play multiple (audio) byte arrays simultaneously? These "byte arrays" are recorded by a TargetDataLine and transmitted via a server.

What I have tried so far

Using a SourceDataLine:

It is not possible to play multiple streams with a SourceDataLine, because the write method blocks until the buffer has been written. This cannot be fixed with Threads, because only one SourceDataLine can write at the same time.

Using the AudioPlayer class:

ByteInputStream stream2 = new ByteInputStream(data, 0, data.length);
AudioInputStream stream = new AudioInputStream(stream2, VoiceChat.format, data.length);
AudioPlayer.player.start(stream);

This only produces noise on the clients.

EDIT: I do not receive the voice packets at the same time; it is more like they "overlap".

3 Answers:

Answer 0 (score: 4)

Apparently, Java's Mixer interface was not designed for this.

http://docs.oracle.com/javase/7/docs/api/javax/sound/sampled/Mixer.html

"A mixer is an audio device with one or more lines. It need not be designed for mixing audio signals."

Indeed, when I tried opening multiple lines on the same mixer, this failed with a LineUnavailableException. However, if all of your recordings have the same audio format, it is quite easy to mix them together manually. For example, if you have 2 inputs:

  1. Convert both to the appropriate data type (for example, byte[] for 8-bit audio, short[] for 16-bit, float[] for 32-bit floating point, and so on).
  2. Sum them in another array. Make sure the summed values never exceed the range of the data type (a minimal clamping sketch follows right after this list).
  3. Convert the output back to bytes and write it to the SourceDataLine.
  4. See also: How is audio represented with numbers?
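
    As a minimal sketch of steps 1-3 (the helper name mix16BitLE and the hard clamp to the 16-bit range are my own additions, not part of the original answer), assuming both inputs are 16-bit little-endian PCM of the same length:

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;

        public class MixUtil {

            // mixes two equally long blocks of 16-bit little-endian PCM into one
            static byte[] mix16BitLE(byte[] a, byte[] b) {
                // step 1: convert bytes to 16-bit samples
                short[] sa = new short[a.length / 2];
                short[] sb = new short[b.length / 2];
                ByteBuffer.wrap(a).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(sa);
                ByteBuffer.wrap(b).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(sb);

                byte[] out = new byte[a.length];
                ByteBuffer outBuffer = ByteBuffer.wrap(out).order(ByteOrder.LITTLE_ENDIAN);
                for (int i = 0; i < sa.length; i++) {
                    // step 2: sum, then clamp so the result stays inside the 16-bit range
                    int sum = sa[i] + sb[i];
                    sum = Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sum));
                    outBuffer.putShort((short) sum);
                }
                return out; // step 3: write this to the SourceDataLine
            }
        }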

    Here is an example that mixes 2 recordings and outputs them as 1 signal, all in 16-bit 48kHz stereo.

        // (this snippet assumes javax.sound.sampled.*, java.nio.ByteBuffer and java.nio.ByteOrder are imported)
        // print all devices (both input and output)
        int i = 0;
        Mixer.Info[] infos = AudioSystem.getMixerInfo();
        for (Mixer.Info info : infos)
            System.out.println(i++ + ": " + info.getName());
    
        // select 2 inputs and 1 output
        System.out.println("Select input 1: ");
        int in1Index = Integer.parseInt(System.console().readLine());
        System.out.println("Select input 2: ");
        int in2Index = Integer.parseInt(System.console().readLine());
        System.out.println("Select output: ");
        int outIndex = Integer.parseInt(System.console().readLine());
    
        // ugly java sound api stuff
        try (Mixer in1Mixer = AudioSystem.getMixer(infos[in1Index]);
                Mixer in2Mixer = AudioSystem.getMixer(infos[in2Index]);
                Mixer outMixer = AudioSystem.getMixer(infos[outIndex])) {
            in1Mixer.open();
            in2Mixer.open();
            outMixer.open();
            try (TargetDataLine in1Line = (TargetDataLine) in1Mixer.getLine(in1Mixer.getTargetLineInfo()[0]);
                    TargetDataLine in2Line = (TargetDataLine) in2Mixer.getLine(in2Mixer.getTargetLineInfo()[0]);
                    SourceDataLine outLine = (SourceDataLine) outMixer.getLine(outMixer.getSourceLineInfo()[0])) {
    
                // audio format 48kHz 16 bit stereo (signed little endian)
                AudioFormat format = new AudioFormat(48000.0f, 16, 2, true, false);
    
                // 4 bytes per frame (16 bit samples stereo)
                int frameSize = 4;
                int bufferSize = 4800;
                int bufferBytes = frameSize * bufferSize;
    
                // buffers for java audio
                byte[] in1Bytes = new byte[bufferBytes];
                byte[] in2Bytes = new byte[bufferBytes];
                byte[] outBytes = new byte[bufferBytes];
    
                // buffers for mixing
                short[] in1Samples = new short[bufferBytes / 2];
                short[] in2Samples = new short[bufferBytes / 2];
                short[] outSamples = new short[bufferBytes / 2];
    
                // how long to record & play
                int framesProcessed = 0;
                int durationSeconds = 10;
                int durationFrames = (int) (durationSeconds * format.getSampleRate());
    
                // open devices
                in1Line.open(format, bufferBytes);
                in2Line.open(format, bufferBytes);
                outLine.open(format, bufferBytes);
                in1Line.start();
                in2Line.start();
                outLine.start();
    
                // start audio loop
                while (framesProcessed < durationFrames) {
    
                    // record audio
                    in1Line.read(in1Bytes, 0, bufferBytes);
                    in2Line.read(in2Bytes, 0, bufferBytes);
    
                    // convert input bytes to samples
                    ByteBuffer.wrap(in1Bytes).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(in1Samples);
                    ByteBuffer.wrap(in2Bytes).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(in2Samples);
    
                    // mix samples - lower volume by 50% since we're mixing 2 streams
                    for (int s = 0; s < bufferBytes / 2; s++)
                        outSamples[s] = (short) ((in1Samples[s] + in2Samples[s]) * 0.5);
    
                    // convert output samples to bytes
                    ByteBuffer.wrap(outBytes).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(outSamples);
    
                    // play audio
                    outLine.write(outBytes, 0, bufferBytes);
    
                    framesProcessed += bufferBytes / frameSize;
                }
    
                in1Line.stop();
                in2Line.stop();
                outLine.stop();
            }
        }
    

Answer 1 (score: 2)

Well, I put something together that should get you started. I'll post the complete code below, but first I'll try to explain the steps involved.

The interesting part here is creating your own audio "mixer" class that allows consumers of that class to schedule audio blocks at specific points in the (near) future. The specific-point-in-time part is important here: I'm assuming you receive voices over the network in packets, where each packet needs to start exactly at the end of the previous one in order to play back a continuous sound for a single voice. Also, since you say voices can overlap, I'm assuming (yes, lots of assumptions) that a new one can come in over the network while one or more old ones are still playing. So it seems reasonable to allow audio blocks to be scheduled from any thread. Note that only one thread actually writes to the data line; it's just that any thread can submit audio packets to the mixer.

So, for the submit-audio-packet part, we now have this:

private final ConcurrentLinkedQueue<QueuedBlock> scheduledBlocks;
public void mix(long when, short[] block) {
    scheduledBlocks.add(new QueuedBlock(when, Arrays.copyOf(block, block.length)));
}

The QueuedBlock class is just used to tag an audio buffer with a "when": the point in time at which the block should be played.

Points in time are expressed relative to the current position of the audio stream. It is set to zero when the stream is created and updated with the buffer size each time an audio buffer is written to the data line:

private final AtomicLong position = new AtomicLong();
public long position() {
    return position.get();
}

Apart from all the hassle of setting up the data line, the interesting part of the mixer class is obviously where the mixing happens. For each scheduled audio block, there are 3 cases:

  • The block has already been played in its entirety. Remove it from the scheduledBlocks list.
  • The block is scheduled to start at some point after the current buffer. Do nothing.
  • (Part of) the block should be mixed into the current buffer. Note that the beginning of the block may (or may not) already have been played in previous buffers. Similarly, the end of the scheduled block may extend beyond the end of the current buffer, in which case we mix in its first part and leave the rest for the following rounds, until all of it has been played and the whole block is removed.

Also note that there is no reliable way to start playing audio data immediately. When you submit packets to the mixer, always have them start at least 1 audio buffer from now, otherwise you risk losing the start of your sound. Here is the mixing code:

    private static final double MIXDOWN_VOLUME = 1.0 / NUM_PRODUCERS;

    private final List<QueuedBlock> finished = new ArrayList<>();
    private final short[] mixBuffer = new short[BUFFER_SIZE_FRAMES * CHANNELS];
    private final byte[] audioBuffer = new byte[BUFFER_SIZE_FRAMES * CHANNELS * 2];
    private final AtomicLong position = new AtomicLong();

    Arrays.fill(mixBuffer, (short) 0);
    long bufferStartAt = position.get();
    for (QueuedBlock block : scheduledBlocks) {
        int blockFrames = block.data.length / CHANNELS;

        // block fully played - mark for deletion
        if (block.when + blockFrames <= bufferStartAt) {
            finished.add(block);
            continue;
        }

        // block starts after end of current buffer
        if (bufferStartAt + BUFFER_SIZE_FRAMES <= block.when)
            continue;

        // mix in part of the block which overlaps current buffer
        int blockOffset = Math.max(0, (int) (bufferStartAt - block.when));
        int blockMaxFrames = blockFrames - blockOffset;
        int bufferOffset = Math.max(0, (int) (block.when - bufferStartAt));
        int bufferMaxFrames = BUFFER_SIZE_FRAMES - bufferOffset;
        for (int f = 0; f < blockMaxFrames && f < bufferMaxFrames; f++)
            for (int c = 0; c < CHANNELS; c++) {
                int bufferIndex = (bufferOffset + f) * CHANNELS + c;
                int blockIndex = (blockOffset + f) * CHANNELS + c;
                mixBuffer[bufferIndex] += (short)
                    (block.data[blockIndex]*MIXDOWN_VOLUME);
            }
    }

    scheduledBlocks.removeAll(finished);
    finished.clear();
    ByteBuffer
        .wrap(audioBuffer)
        .order(ByteOrder.LITTLE_ENDIAN)
        .asShortBuffer()
        .put(mixBuffer);
    line.write(audioBuffer, 0, audioBuffer.length);
    position.addAndGet(BUFFER_SIZE_FRAMES);

Finally, here is a complete, self-contained sample that spawns a number of threads submitting audio blocks, representing sine waves of random duration and frequency, to the mixer (called AudioConsumer in this sample). Replace the sine waves with incoming network packets and you should be halfway to a solution.

package test;

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.Line;
import javax.sound.sampled.Mixer;
import javax.sound.sampled.SourceDataLine;

public class Test {

public static final int CHANNELS = 2;
public static final int SAMPLE_RATE = 48000;
public static final int NUM_PRODUCERS = 10;
public static final int BUFFER_SIZE_FRAMES = 4800;

// generates some random sine wave
public static class ToneGenerator {

    private static final double[] NOTES = {261.63, 311.13, 392.00};
    private static final double[] OCTAVES = {1.0, 2.0, 4.0, 8.0};
    private static final double[] LENGTHS = {0.05, 0.25, 1.0, 2.5, 5.0};

    private double phase;
    private int framesProcessed;
    private final double length;
    private final double frequency;

    public ToneGenerator() {
        ThreadLocalRandom rand = ThreadLocalRandom.current();
        length = LENGTHS[rand.nextInt(LENGTHS.length)];
        frequency = NOTES[rand.nextInt(NOTES.length)] * OCTAVES[rand.nextInt(OCTAVES.length)];
    }

    // make sound
    public void fill(short[] block) {
        for (int f = 0; f < block.length / CHANNELS; f++) {
            double sample = Math.sin(phase * 2.0 * Math.PI);
            for (int c = 0; c < CHANNELS; c++)
                block[f * CHANNELS + c] = (short) (sample * Short.MAX_VALUE);
            phase += frequency / SAMPLE_RATE;
        }
        framesProcessed += block.length / CHANNELS;
    }

    // true if length of tone has been generated
    public boolean done() {
        return framesProcessed >= length * SAMPLE_RATE;
    }
}

// dummy audio producer, based on sinewave generator
// above but could also be incoming network packets
public static class AudioProducer {

    final Thread thread;
    final AudioConsumer consumer;
    final short[] buffer = new short[BUFFER_SIZE_FRAMES * CHANNELS];

    public AudioProducer(AudioConsumer consumer) {
        this.consumer = consumer;
        thread = new Thread(() -> run());
        thread.setDaemon(true);
    }

    public void start() {
        thread.start();
    }

    // repeatedly play random sine and sleep for some time
    void run() {
        try {
            ThreadLocalRandom rand = ThreadLocalRandom.current();
            while (true) {
                long pos = consumer.position();
                ToneGenerator g = new ToneGenerator();

                // if we schedule at current buffer position, first part of the tone will be
                // missed so have tone start somewhere in the middle of the next buffer
                pos += BUFFER_SIZE_FRAMES + rand.nextInt(BUFFER_SIZE_FRAMES);
                while (!g.done()) {
                    g.fill(buffer);
                    consumer.mix(pos, buffer);
                    pos += BUFFER_SIZE_FRAMES;

                    // we can generate audio faster than it's played
                    // sleep a while to compensate - this more closely
                    // corresponds to playing audio coming in over the network
                    double bufferLengthMillis = BUFFER_SIZE_FRAMES * 1000.0 / SAMPLE_RATE;
                    Thread.sleep((int) (bufferLengthMillis * 0.9));
                }

                // sleep a while in between tones
                Thread.sleep(1000 + rand.nextInt(2000));
            }
        } catch (Throwable t) {
            System.out.println(t.getMessage());
            t.printStackTrace();
        }
    }
}

// audio consumer - plays continuously on a background
// thread, allows audio to be mixed in from arbitrary threads
public static class AudioConsumer {

    // audio block with "when to play" tag
    private static class QueuedBlock {

        final long when;
        final short[] data;

        public QueuedBlock(long when, short[] data) {
            this.when = when;
            this.data = data;
        }
    }

    // need not normally be so low but in this example
    // we're mixing down a bunch of full scale sinewaves
    private static final double MIXDOWN_VOLUME = 1.0 / NUM_PRODUCERS;

    private final List<QueuedBlock> finished = new ArrayList<>();
    private final short[] mixBuffer = new short[BUFFER_SIZE_FRAMES * CHANNELS];
    private final byte[] audioBuffer = new byte[BUFFER_SIZE_FRAMES * CHANNELS * 2];

    private final Thread thread;
    private final AtomicLong position = new AtomicLong();
    private final AtomicBoolean running = new AtomicBoolean(true);
    private final ConcurrentLinkedQueue<QueuedBlock> scheduledBlocks = new ConcurrentLinkedQueue<>();


    public AudioConsumer() {
        thread = new Thread(() -> run());
    }

    public void start() {
        thread.start();
    }

    public void stop() {
        running.set(false);
    }

    // gets the play cursor. note - this is not accurate and 
    // must only be used to schedule blocks relative to other blocks
    // (e.g., for splitting up continuous sounds into multiple blocks)
    public long position() {
        return position.get();
    }

    // put copy of audio block into queue so we don't
    // have to worry about caller messing with it afterwards
    public void mix(long when, short[] block) {
        scheduledBlocks.add(new QueuedBlock(when, Arrays.copyOf(block, block.length)));
    }

    // better hope mixer 0, line 0 is output
    private void run() {
        Mixer.Info[] mixerInfo = AudioSystem.getMixerInfo();
        try (Mixer mixer = AudioSystem.getMixer(mixerInfo[0])) {
            Line.Info[] lineInfo = mixer.getSourceLineInfo();
            try (SourceDataLine line = (SourceDataLine) mixer.getLine(lineInfo[0])) {
                line.open(new AudioFormat(SAMPLE_RATE, 16, CHANNELS, true, false), BUFFER_SIZE_FRAMES);
                line.start();
                while (running.get())
                    processSingleBuffer(line);
                line.stop();
            }
        } catch (Throwable t) {
            System.out.println(t.getMessage());
            t.printStackTrace();
        }
    }

    // mix down single buffer and offer to the audio device
    private void processSingleBuffer(SourceDataLine line) {

        Arrays.fill(mixBuffer, (short) 0);
        long bufferStartAt = position.get();

        // mixdown audio blocks
        for (QueuedBlock block : scheduledBlocks) {

            int blockFrames = block.data.length / CHANNELS;

            // block fully played - mark for deletion
            if (block.when + blockFrames <= bufferStartAt) {
                finished.add(block);
                continue;
            }

            // block starts after end of current buffer
            if (bufferStartAt + BUFFER_SIZE_FRAMES <= block.when)
                continue;

            // mix in part of the block which overlaps current buffer
            // note that block may have already started in the past
            // but extends into the current buffer, or that it starts
            // in the future but before the end of the current buffer
            int blockOffset = Math.max(0, (int) (bufferStartAt - block.when));
            int blockMaxFrames = blockFrames - blockOffset;
            int bufferOffset = Math.max(0, (int) (block.when - bufferStartAt));
            int bufferMaxFrames = BUFFER_SIZE_FRAMES - bufferOffset;
            for (int f = 0; f < blockMaxFrames && f < bufferMaxFrames; f++)
                for (int c = 0; c < CHANNELS; c++) {
                    int bufferIndex = (bufferOffset + f) * CHANNELS + c;
                    int blockIndex = (blockOffset + f) * CHANNELS + c;
                    mixBuffer[bufferIndex] += (short) (block.data[blockIndex] * MIXDOWN_VOLUME);
                }
        }

        scheduledBlocks.removeAll(finished);
        finished.clear();
        ByteBuffer.wrap(audioBuffer).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(mixBuffer);
        line.write(audioBuffer, 0, audioBuffer.length);
        position.addAndGet(BUFFER_SIZE_FRAMES);
    }
}

public static void main(String[] args) {

    System.out.print("Press return to exit...");
    AudioConsumer consumer = new AudioConsumer();
    consumer.start();
    for (int i = 0; i < NUM_PRODUCERS; i++)
        new AudioProducer(consumer).start();
    System.console().readLine();
    consumer.stop();
}
}
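
For the network case mentioned above, here is a hypothetical sketch of a receiver that feeds incoming packets into the AudioConsumer. The nested class, the UDP socket, and the assumption that each packet carries raw 16-bit little-endian PCM in the consumer's format are my own additions, not part of the original answer; it is meant to sit inside the Test class so the constants resolve.

// hypothetical receiver - one instance per remote voice, meant as a nested class of Test
public static class NetworkAudioProducer implements Runnable {

    private final AudioConsumer consumer;
    private final java.net.DatagramSocket socket;
    private long playCursor = -1;

    public NetworkAudioProducer(AudioConsumer consumer, int port) throws Exception {
        this.consumer = consumer;
        this.socket = new java.net.DatagramSocket(port);
    }

    @Override
    public void run() {
        byte[] packet = new byte[BUFFER_SIZE_FRAMES * CHANNELS * 2];
        try {
            while (true) {
                java.net.DatagramPacket p = new java.net.DatagramPacket(packet, packet.length);
                socket.receive(p);

                // convert the received bytes (assumed 16-bit little-endian PCM) to samples
                short[] samples = new short[p.getLength() / 2];
                ByteBuffer.wrap(packet, 0, p.getLength())
                        .order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(samples);

                // schedule the first packet of a voice at least one buffer ahead,
                // subsequent packets directly after the previous one
                if (playCursor < consumer.position())
                    playCursor = consumer.position() + BUFFER_SIZE_FRAMES;
                consumer.mix(playCursor, samples);
                playCursor += samples.length / CHANNELS;
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}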

Answer 2 (score: 0)

You can use the Tritonus library to do software audio mixing (it is pretty dated, but it works quite well).

Add the dependency to your project:

<dependency>
    <groupId>com.googlecode.soundlibs</groupId>
    <artifactId>tritonus-all</artifactId>
    <version>0.3.7.2</version>
</dependency>

Use org.tritonus.share.sampled.FloatSampleBuffer. Both buffers must have the same AudioFormat before calling #mix:

// TODO instantiate these variables with real data
byte[] audio1, audio2;
AudioFormat af1, af2;
SourceDataLine sdl = AudioSystem.getSourceDataLine(af1);

FloatSampleBuffer fsb1 = new FloatSampleBuffer(audio1, 0, audio1.length, af1);
FloatSampleBuffer fsb2 = new FloatSampleBuffer(audio2, 0, audio2.length, af2);

fsb1.mix(fsb2);
byte[] result = fsb1.convertToByteArray(af1);

sdl.open(af1);  // the line must be opened and started before it will play
sdl.start();
sdl.write(result, 0, result.length); // play it