I'm building an app that uses speech recognition and outputs the recognized text to a TextView. To visualize the audio while it is being captured, I use the example implementation, which renders the speech roughly as a waveform. The implementation looks like this:
// The sampling rate for the audio recorder.
private static final int SAMPLING_RATE = 44100;

private WaveformView mWaveformView;
private TextView mDecibelView;
private RecordingThread mRecordingThread;
private int mBufferSize;
private short[] mAudioBuffer;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.layout_waveform);
    mWaveformView = (WaveformView) findViewById(R.id.waveform_view);

    // Compute the minimum required audio buffer size and allocate the buffer.
    // getMinBufferSize() returns a size in bytes; the buffer holds 16-bit
    // samples, so it needs half as many short elements.
    mBufferSize = AudioRecord.getMinBufferSize(SAMPLING_RATE, AudioFormat.CHANNEL_IN_MONO,
            AudioFormat.ENCODING_PCM_16BIT);
    mAudioBuffer = new short[mBufferSize / 2];
}

@Override
protected void onResume() {
    super.onResume();

    mRecordingThread = new RecordingThread();
    mRecordingThread.start();
}
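Since stopRunning() is never shown being called, I assume the example stops the thread when the activity is paused; a minimal sketch of that part (not shown in the code above):

@Override
protected void onPause() {
    super.onPause();

    // Signal the recording thread to exit its loop; it then stops
    // and releases the AudioRecord on its own.
    mRecordingThread.stopRunning();
    mRecordingThread = null;
}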
And the background thread that does the recording itself:
/**
 * A background thread that receives audio from the microphone and sends it to the waveform
 * visualizing view.
 */
private class RecordingThread extends Thread {

    private boolean mShouldContinue = true;

    @Override
    public void run() {
        android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_AUDIO);

        AudioRecord record = new AudioRecord(AudioSource.MIC, SAMPLING_RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, mBufferSize);
        record.startRecording();

        while (shouldContinue()) {
            // read() takes a count in shorts, hence mBufferSize / 2.
            record.read(mAudioBuffer, 0, mBufferSize / 2);
            mWaveformView.updateAudioData(mAudioBuffer);
            updateDecibelLevel();
        }

        record.stop();
        record.release();
    }

    /**
     * Gets a value indicating whether the thread should continue running.
     *
     * @return true if the thread should continue running or false if it should stop
     */
    private synchronized boolean shouldContinue() {
        return mShouldContinue;
    }

    /** Notifies the thread that it should stop running at the next opportunity. */
    public synchronized void stopRunning() {
        mShouldContinue = false;
    }
}
The speech recognition itself is implemented in the standard way, so I don't think the problem lies there...
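By "standard" I mean roughly the usual SpeechRecognizer / RecognizerIntent wiring, something like the sketch below (mSpeechRecognizer and mTextView are illustrative names, not my exact code):

// Minimal sketch of the standard SpeechRecognizer setup.
SpeechRecognizer mSpeechRecognizer = SpeechRecognizer.createSpeechRecognizer(this);
mSpeechRecognizer.setRecognitionListener(new RecognitionListener() {
    @Override
    public void onResults(Bundle results) {
        // Take the best hypothesis and show it in the TextView.
        ArrayList<String> matches =
                results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
        if (matches != null && !matches.isEmpty()) {
            mTextView.setText(matches.get(0));
        }
    }

    // Remaining RecognitionListener callbacks omitted for brevity.
    @Override public void onReadyForSpeech(Bundle params) {}
    @Override public void onBeginningOfSpeech() {}
    @Override public void onRmsChanged(float rmsdB) {}
    @Override public void onBufferReceived(byte[] buffer) {}
    @Override public void onEndOfSpeech() {}
    @Override public void onError(int error) {}
    @Override public void onPartialResults(Bundle partialResults) {}
    @Override public void onEvent(int eventType, Bundle params) {}
});

Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
        RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
mSpeechRecognizer.startListening(intent);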
The essence of the problem
When I run the app, either the visualizer works or the recognition works, but never both. How can I make both run at the same time?