I'm new to Android and I've been working on a Pitch Analyzer application (minimum SDK: 8). I've read many articles on how to use the AudioRecord class, but I can't figure out why it doesn't read any data while I'm recording. I tried to display the values of audioData and fftArray, but they come back as zeros, so I think the problem is in the read method. Please have a look. Here is the code I'm using:
record.java
final Intent intent = new Intent("pitch.analyzer.PitZer.ASSESSMENT");
MediaRecorder recorder;
AudioRecord tuner;
int audioSource = MediaRecorder.AudioSource.MIC;
int sampleRateInHz = AudioTrack.getNativeOutputSampleRate(AudioManager.STREAM_SYSTEM);
int channelConfig = AudioFormat.CHANNEL_CONFIGURATION_MONO;
int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
int bufferSizeInBytes = 4096;
int samples;
short[] audioBuffer;
short[] audioData;
double[] temp;
TextView fft;
TextView results;
//TextView bufferSize;
Complex[] fftTempArray;
Complex[] fftArray;
Complex[] fftInverse;
@Override
protected void onCreate(Bundle savedInstanceState) {
    // TODO Auto-generated method stub
    super.onCreate(savedInstanceState);
    setContentView(R.layout.record);
    Button start = (Button) findViewById(R.id.record);
    Button stop = (Button) findViewById(R.id.stop);
    fft = (TextView) findViewById(R.id.fft);
    results = (TextView) findViewById(R.id.results);
    //bufferSize = (TextView) findViewById(R.id.bufferSize);
    audioData = new short[bufferSizeInBytes];
    tuner = new AudioRecord(audioSource, sampleRateInHz, channelConfig, audioFormat, bufferSizeInBytes);
    //final AudioRecorder recorder = new AudioRecorder("/audiometer/temp");
    start.setOnClickListener(new OnClickListener() {
        public void onClick(View v) {
            acquire();
            computeFFT();
            display();
        }
    });
    //….wait a while
    stop.setOnClickListener(new OnClickListener() {
        public void onClick(View v) {
            startActivity(intent);
        }
    });
}
public void acquire() {
    try {
        tuner.startRecording();
        samples = tuner.read(audioData, 0, bufferSizeInBytes);
    }
    catch (Throwable t) {
    }
}
public void computeFFT() {
    //Conversion from short to double
    double[] micBufferData = new double[bufferSizeInBytes];//size may need to change
    final int bytesPerSample = 2; // As it is 16bit PCM
    final double amplification = 100.0; // choose a number as you like
    for (int index = 0, floatIndex = 0; index < bufferSizeInBytes - bytesPerSample + 1; index += bytesPerSample, floatIndex++) {
        double sample = 0;
        for (int b = 0; b < bytesPerSample; b++) {
            int v = audioData[index + b];
            if (b < bytesPerSample - 1 || bytesPerSample == 1) {
                v &= 0xFF;
            }
            sample += v << (b * 8);
        }
        double sample32 = amplification * (sample / 32768.0);
        micBufferData[floatIndex] = sample32;
    }
    //Create Complex array for use in FFT
    fftTempArray = new Complex[bufferSizeInBytes];
    for (int i = 0; i < bufferSizeInBytes; i++) {
        fftTempArray[i] = new Complex(micBufferData[i], 0);
    }
    //Obtain array of FFT data
    fftArray = FFT.fft(fftTempArray);
    fftInverse = FFT.ifft(fftTempArray);
    double[] freq2 = new double[fftArray.length];
    //Create an array of magnitude of fftArray
    double[] magnitude = new double[fftArray.length];
    for (int i = 0; i < fftArray.length; i++) {
        magnitude[i] = fftArray[i].abs();
        freq2[i] = ComputeFrequency(magnitude[i]);
    }
    fft.setTextColor(Color.BLUE);
    //fft.setText("fftArray is "+ fftArray[500] +" and fftTempArray is "+fftTempArray[500] + " and fftInverse is "+fftInverse[500]+" and audioData is "+audioData[500]+ " and magnitude is "+ magnitude[1] + ", "+magnitude[500]+", "+magnitude[1000]+ " and freq2 is "+ freq2[1]+" You rock dude!");
    /*for(int i = 2; i < samples; i++){
        fft.append(" " + magnitude[i] + " Hz");
    }
    for(int i = 2; i < samples; i++){
        fft.append(" " + freq2[i] + " Hz");
    }
    */
}
private double ComputeFrequency(double arrayIndex) {
    return ((1.0 * sampleRateInHz) / (1.0 * 100)) * arrayIndex;
}
public void display() {
    results.setTextColor(Color.BLUE);
    results.setText("results: " + audioData[1] + "");
    for (int i = 2; i < samples; i++) {
        results.append(" " + audioData[i]);
    }
    results.invalidate();
    //fft.setTextColor(Color.GREEN);
    fft.setText("sampleRateInHz: " + sampleRateInHz);
    fft.append("\nfftArray: " + fftArray[0] + " Hz");
    for (int i = 1; i < samples; i++) {
        fft.append(" " + fftArray[i] + " Hz");
    }
    fft.append("\naudioData: " + audioData[1]);
    fft.append("\nsamples: " + samples);
    //fft.invalidate();
}
public void stop() throws IOException {
    tuner.stop();
    //audioInput.reset();
    tuner.release();
    //recorder.stop();
    //recorder.reset();
    //recorder.release();
}
Answer 0 (score: 4)
You should start recording before you read from the device (and stop recording once you are done).
Here is the code I use for a simple read:
short[] audioData = new short[bufferSize];
int offset = 0;
int shortRead = 0;
//start tapping into the microphone
audioRecord.startRecording();
//start reading from the microphone into an internal buffer - chunk by chunk
while (offset < bufferSize)
{
    shortRead = audioRecord.read(audioData, offset, bufferSize - offset);
    offset += shortRead;
}
//stop tapping into the microphone
audioRecord.stop();
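Mapped onto the fields used in the question (tuner, audioData, bufferSizeInBytes, samples), acquire() could look roughly like the sketch below. This is only an illustration under those assumptions, not a drop-in fix, and it adds a guard so the loop stops if read() returns a negative error code instead of a sample count:

public void acquire() {
    tuner.startRecording();
    int offset = 0;
    while (offset < bufferSizeInBytes) {
        // read() returns the number of shorts copied into audioData,
        // or a negative error code
        int shortsRead = tuner.read(audioData, offset, bufferSizeInBytes - offset);
        if (shortsRead < 0) {
            break; // error: stop instead of looping forever
        }
        offset += shortsRead;
    }
    samples = offset;
    tuner.stop();
}

Note that read() blocks until samples are available, so in a real app this loop belongs on a background thread rather than inside the button's onClick handler; it is kept synchronous here only so the existing acquire() / computeFFT() / display() call order from the question still works.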
Answer 1 (score: 1)
Also check that the microphone is used by only one component in your application. On devices before API level 23, the microphone stream cannot be read without blocking other readers: if two readers open the microphone, only the first one gets real data, and the others get -2 or -3 back from read() (in place of the read count) to signal that the stream is blocked.
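If you want to detect that situation in code, one option (again just a sketch reusing the tuner, audioData and bufferSizeInBytes fields from the question; the log tag is arbitrary) is to compare the value returned by read() against the AudioRecord error constants, -2 (ERROR_BAD_VALUE) and -3 (ERROR_INVALID_OPERATION):

int result = tuner.read(audioData, 0, bufferSizeInBytes);
if (result == AudioRecord.ERROR_INVALID_OPERATION) {
    // -3: the recorder is not actually recording, e.g. because another
    // component already holds the microphone
    Log.e("PitchAnalyzer", "read() failed: microphone busy or recorder not started");
} else if (result == AudioRecord.ERROR_BAD_VALUE) {
    // -2: the offset/size arguments or the buffer are invalid
    Log.e("PitchAnalyzer", "read() failed: bad buffer parameters");
} else {
    samples = result; // a non-negative value is the number of shorts read
}

You can also call tuner.getRecordingState() right after startRecording() and check that it equals AudioRecord.RECORDSTATE_RECORDING to confirm the recorder actually acquired the microphone.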