I'm trying to build a custom dialog for speech recognition instead of using the official one. I got that part working, but then I decided to also display the amplitude of the sound while recognizing, to make it fancier, like the Google Now search bar (the circle around the microphone grows when the sound gets louder):
googlenow http://img600.imageshack.us/img600/3459/gnow.png
Then I started working out how to get the amplitude of the sound, and I finally got it using the AudioRecord class.
The problem appears when I try to mix the two (SpeechRecognizer and AudioRecord), because they don't seem to be able to share the microphone, or something like that...
In logcat I get this error:
03-03 21:16:07.461: E/ListenerAdapter(23359): onError
03-03 21:16:07.461: E/ListenerAdapter(23359): com.google.android.speech.embedded.Greco3RecognitionEngine$EmbeddedRecognizerUnavailableException: Embedded recognizer unavailable
03-03 21:16:07.461: E/ListenerAdapter(23359): at com.google.android.speech.embedded.Greco3RecognitionEngine.startRecognition(Greco3RecognitionEngine.java:108)
03-03 21:16:07.461: E/ListenerAdapter(23359): at java.lang.reflect.Method.invokeNative(Native Method)
03-03 21:16:07.461: E/ListenerAdapter(23359): at java.lang.reflect.Method.invoke(Method.java:511)
03-03 21:16:07.461: E/ListenerAdapter(23359): at com.google.android.searchcommon.utils.ThreadChanger$1$1.run(ThreadChanger.java:77)
03-03 21:16:07.461: E/ListenerAdapter(23359): at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:390)
03-03 21:16:07.461: E/ListenerAdapter(23359): at java.util.concurrent.FutureTask.run(FutureTask.java:234)
03-03 21:16:07.461: E/ListenerAdapter(23359): at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:153)
03-03 21:16:07.461: E/ListenerAdapter(23359): at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
03-03 21:16:07.461: E/ListenerAdapter(23359): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1080)
03-03 21:16:07.461: E/ListenerAdapter(23359): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:573)
03-03 21:16:07.461: E/ListenerAdapter(23359): at com.google.android.searchcommon.utils.ConcurrentUtils$2$1.run(ConcurrentUtils.java:112)
And sometimes I get this one:
03-03 21:47:13.344: E/ListenerAdapter(23359): onError
03-03 21:47:13.344: E/ListenerAdapter(23359): com.google.android.speech.exception.AudioRecognizeException: Audio error
03-03 21:47:13.344: E/ListenerAdapter(23359): at com.google.android.speech.embedded.Greco3Recognizer.read(Greco3Recognizer.java:107)
03-03 21:47:13.344: E/ListenerAdapter(23359): at dalvik.system.NativeStart.run(Native Method)
03-03 21:47:13.344: E/ListenerAdapter(23359): Caused by: java.io.IOException: couldn't start recording, state is:1
03-03 21:47:13.344: E/ListenerAdapter(23359): at com.google.android.speech.audio.MicrophoneInputStream.ensureStartedLocked(MicrophoneInputStream.java:119)
03-03 21:47:13.344: E/ListenerAdapter(23359): at com.google.android.speech.audio.MicrophoneInputStream.read(MicrophoneInputStream.java:159)
03-03 21:47:13.344: E/ListenerAdapter(23359): at com.google.common.io.ByteStreams.read(ByteStreams.java:806)
03-03 21:47:13.344: E/ListenerAdapter(23359): at com.google.android.speech.audio.Tee.readFromDelegate(Tee.java:374)
03-03 21:47:13.344: E/ListenerAdapter(23359): at com.google.android.speech.audio.Tee.readLeader(Tee.java:267)
03-03 21:47:13.344: E/ListenerAdapter(23359): at com.google.android.speech.audio.Tee$TeeLeaderInputStream.read(Tee.java:464)
03-03 21:47:13.344: E/ListenerAdapter(23359): at java.io.InputStream.read(InputStream.java:163)
03-03 21:47:13.344: E/ListenerAdapter(23359): at com.google.android.speech.audio.AudioSource$CaptureThread.run(AudioSource.java:193)
This is how I launch both:
// previously in the constructor
speechrec = SpeechRecognizer.createSpeechRecognizer(getActivity());
speechrec.setRecognitionListener(this);

public void launchListening()
{
    // Start the speech recognizer with a free-form language model
    if (SpeechRecognizer.isRecognitionAvailable(getActivity()))
    {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        speechrec.startListening(intent);
    }

    // Open the microphone a second time with AudioRecord to measure the amplitude
    bufferSize = AudioRecord.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT); // * bufferSizeFactor;
    audio = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);
    audio.startRecording();

    captureThread = new Thread(new Runnable()
    {
        public void run()
        {
            // calculate amplitude here (see the sketch below)
        }
    });
    captureThread.start();
}
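For reference, here is a minimal sketch of what the capture thread could do to estimate the amplitude from the AudioRecord buffer. The fields audio and bufferSize come from the snippet above, while isCapturing and the dB conversion are assumptions added for illustration, not code from the question:

// Runs inside captureThread; assumes the audio and bufferSize fields from the snippet above.
// isCapturing is a hypothetical volatile flag used to stop the loop when recognition ends.
short[] samples = new short[bufferSize / 2]; // 16-bit PCM, so 2 bytes per sample
while (isCapturing) {
    int read = audio.read(samples, 0, samples.length);
    if (read > 0) {
        double sum = 0;
        for (int i = 0; i < read; i++) {
            sum += (double) samples[i] * samples[i];
        }
        double rms = Math.sqrt(sum / read);                           // root mean square of the frame
        double db = rms > 0 ? 20 * Math.log10(rms / 32768.0) : -96;   // rough level in dBFS
        // post rms/db to the UI thread and scale the circle around the microphone accordingly
    }
}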
Any idea how to create a custom dialog for speech recognition where I can show the amplitude according to the noise, like Google does?
Answer 0 (score: 3)
The way to do this is to use SpeechRecognizer: register a RecognitionListener and visualize the values it delivers to onRmsChanged. But note the caveat in the documentation:

There is no guarantee that this method will be called.

So the recognizer you are using needs to support this callback, and what SpeechRecognizer.createSpeechRecognizer(getActivity()) actually returns depends on how the user's device is configured. (An AudioRecord cannot be started while the SpeechRecognizer is recording, and vice versa, which is why mixing the two fails.)
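To make this concrete, here is a minimal sketch of how the visualization could be driven from the recognizer itself, assuming the fragment already implements RecognitionListener as in the question. The view name micCircleView and the scaling math are made up for illustration:

// In the class that implements RecognitionListener (the same one passed to setRecognitionListener).
// SpeechRecognizer callbacks arrive on the main thread, so the view can be updated directly.
@Override
public void onRmsChanged(float rmsdB) {
    // rmsdB is the current sound level; values roughly between -2 and 10 dB are common,
    // but the exact range varies per device. Map it to a 0..1 factor.
    float level = Math.max(0f, Math.min(1f, (rmsdB + 2f) / 12f));
    // micCircleView is a hypothetical view drawing the circle around the mic icon.
    micCircleView.setScaleX(1f + level);
    micCircleView.setScaleY(1f + level);
}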