Here is some code I am using to generate a continuous sine wave in Android Studio. The whole thing runs inside a thread. My question is: when I call audio.write(), what happens to any data that may still be in the buffer? Does it dump the old samples and write a new set, or does it append the new array of samples to whatever is left over?
int buffSize = AudioTrack.getMinBufferSize(sr, AudioFormat.CHANNEL_OUT_MONO,AudioFormat.ENCODING_PCM_16BIT);
//create the AudioTrack object
AudioTrack audio = new AudioTrack(AudioManager.STREAM_MUSIC,
        sr,
        AudioFormat.CHANNEL_OUT_MONO,
        AudioFormat.ENCODING_PCM_16BIT,
        buffSize,
        AudioTrack.MODE_STREAM);
//initialise values for synthesis
short samples[]= new short[buffSize]; //array the same size as buffer
int amp=10000; //amplitude of the waveform
double twopi = 8.*Math.atan(1.); //2*pi
double fr = 440; //the frequency to create
double ph = 0; //phase shift
//start audio
audio.play();
//synthesis loop
while (isRunning)
{
    fr = 440 + 4.4 * sliderVal;
    for (int i = 0; i < buffSize; i++)
    {
        samples[i] = (short)(amp * Math.sin(ph));
        ph += twopi * fr / sr;
    }
    audio.write(samples, 0, buffSize);
}
//stop the audio track
audio.stop();
audio.release();
Answer 0 (score: 1)
You are setting the buffer size correctly according to the device's capabilities, which is important for minimizing latency.
You are then building buffers and writing them out, chunk by chunk, to the hardware where they can be heard. None of the old data is left "sitting in there" to be dumped: in MODE_STREAM each call to write() queues the new samples behind whatever has already been written (and blocks when the track's internal buffer is full), so the buffer you pass is written in its entirety each time through track.write().
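To make that append-and-block behavior concrete, here is a minimal sketch (my illustration, not part of the original answer). It writes half-buffer chunks and logs the playback head position after each call; fillWithSine is a hypothetical helper standing in for the synthesis loop above, and audio, buffSize, and isRunning come from the question's code:

short[] chunk = new short[buffSize / 2];               // half-buffer chunks
while (isRunning) {
    fillWithSine(chunk);                               // hypothetical synthesis helper
    int written = audio.write(chunk, 0, chunk.length); // blocks until the samples are queued
    Log.d("Synth", "queued " + written + " samples, playback head at "
            + audio.getPlaybackHeadPosition());        // frames already played out
}

Because write() blocks once the track's buffer is full, the loop naturally paces itself to the playback rate and nothing is overwritten.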
Here is my generateTone routine, which is very similar to yours. It is called with a frequency in Hz and a duration in ms:
AudioTrack sound = generateTone(440, 250);
The generateTone method:
private AudioTrack generateTone(double freqHz, int durationMs) {
    // total number of 16-bit samples: 44.1 kHz * 2 channels * duration, forced even
    int count = (int)(44100.0 * 2.0 * (durationMs / 1000.0)) & ~1;
    short[] samples = new short[count];
    for (int i = 0; i < count; i += 2) {
        // i / 2 is the frame index; write the same value to both stereo channels
        short sample = (short)(Math.sin(2 * Math.PI * (i / 2) / (44100.0 / freqHz)) * 0x7FFF);
        samples[i + 0] = sample;
        samples[i + 1] = sample;
    }
    // MODE_STATIC: the whole buffer is loaded once and can be replayed later
    AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
            AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
            count * (Short.SIZE / 8), AudioTrack.MODE_STATIC);
    track.write(samples, 0, count);
    return track;
}
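As a usage note (my addition, not part of the original answer): because the returned track is MODE_STATIC, the samples stay loaded and the same tone can be re-triggered without writing them again. A minimal sketch, assuming the sound variable created above:

sound.play();                  // play the 250 ms tone from the static buffer

// ...to play it again later, rewind the static buffer and restart:
sound.stop();
sound.reloadStaticData();      // resets the playback head to the start of the static buffer
sound.play();

sound.release();               // free the native resources when finished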
AudioTrack is cool because you can create any kind of sound if you have the right algorithm. Pure Data and Csound make that much easier on Android.
(I wrote a large chapter on audio in the book "Android Software Development - Collection of Practical Projects".)