I'm trying to draw a spectral representation of some recorded audio. Using the AudioRecord class, I get an array of sample values, which I assume are amplitudes. The problem is that I don't know how to represent the time axis.
Any ideas?
Recording code:
public void startRecording(View v) {
    mIsRecording = true;
    mRecorder.startRecording();

    new Thread(new Runnable() {
        @Override
        public void run() {
            while (mIsRecording) {
                int readSize = mRecorder.read(mBuffer, 0, mBuffer.length);
            }
        }
    }).start();
}
Recorder init:
private void initRecorder() {
    int bufferSize = AudioRecord.getMinBufferSize(SAMPLE_RATE,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
    mBuffer = new short[bufferSize];
    mRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC, SAMPLE_RATE,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);
}
Thanks in advance.
Answer 0 (score: 0):
First of all, when you talk about a spectrum, you are referring to the frequency content of the audio signal, which is very different from its amplitude.
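To make that distinction concrete, here is a minimal sketch of what "frequency content" means for a frame of PCM samples: a naive DFT that returns one magnitude per frequency bin. The class and method names are my own, and in practice you would use an FFT library instead, since the naive version is O(n²):

```java
public class Dft {
    // Returns the magnitude of each frequency bin for a real-valued frame.
    // Bin k corresponds to the frequency k * sampleRate / samples.length Hz;
    // bins above n/2 mirror the lower half, so only n/2 are computed.
    static double[] magnitudeSpectrum(short[] samples) {
        int n = samples.length;
        double[] mag = new double[n / 2];
        for (int k = 0; k < n / 2; k++) {
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                double angle = 2 * Math.PI * k * t / n;
                re += samples[t] * Math.cos(angle);
                im -= samples[t] * Math.sin(angle);
            }
            mag[k] = Math.hypot(re, im);
        }
        return mag;
    }
}
```

Feeding a pure tone into this produces a single dominant bin at the tone's frequency, which is exactly what a spectral plot would show on its x axis.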
In any case, if what you want is to plot amplitude over time, the x axis is time and the y axis is amplitude. The key value here is the sample rate, which defines how many samples per second you record; you need to decide how often (relative to the sample rate) you draw an averaged amplitude value on screen, and then... just draw it. I suggest you start with something simpler, like the following example from the Samsung official docs:
AudioRecord recorder;    // our recorder, must be initialized first
short[] buffer;          // buffer where we will put captured samples
DataOutputStream output; // output stream to target file
boolean isRecording;     // indicates if sound is currently being captured
ProgressBar pb;          // our progress bar received from layout

while (isRecording) {
    double sum = 0;
    int readSize = recorder.read(buffer, 0, buffer.length);
    for (int i = 0; i < readSize; i++) {
        output.writeShort(buffer[i]);
        sum += buffer[i] * buffer[i];
    }
    if (readSize > 0) {
        final double amplitude = sum / readSize;
        pb.setProgress((int) Math.sqrt(amplitude));
    }
}
This example displays the amplitude value in real time (using a ProgressBar); you can adapt it to plot the amplitude values in chronological order (for example, by drawing consecutive vertical lines on a Canvas).
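One way to get the per-time-slice values for such a plot: reduce each fixed-size chunk of samples to a single RMS amplitude, and derive each chunk's start time from the sample rate. A minimal plain-Java sketch of just that reduction step (class and method names are my own, not part of the Android API):

```java
public class AmplitudeTrack {
    // Reduces each fixed-size chunk of samples to one RMS amplitude,
    // yielding one value per chunk to draw as a vertical line.
    static double[] rmsPerChunk(short[] samples, int chunkSize) {
        int chunks = samples.length / chunkSize;
        double[] rms = new double[chunks];
        for (int c = 0; c < chunks; c++) {
            double sum = 0;
            for (int i = 0; i < chunkSize; i++) {
                double s = samples[c * chunkSize + i];
                sum += s * s;
            }
            rms[c] = Math.sqrt(sum / chunkSize);
        }
        return rms;
    }

    // Start time in seconds of chunk c: its first sample index divided
    // by the sample rate (samples per second).
    static double chunkStartSeconds(int chunk, int chunkSize, int sampleRate) {
        return chunk * (double) chunkSize / sampleRate;
    }
}
```

Each rms[c] would then give the height of the vertical line drawn at an x position proportional to chunkStartSeconds(c, chunkSize, SAMPLE_RATE), which is the time axis you were asking about.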