I built a simple Android app to control relays connected to my Raspberry Pi. The app uses toggle buttons, plus basic voice recognition that triggers those buttons to switch the corresponding relay channels on and off.
At the moment the voice recognition is handled by RecognizerIntent: I have to press a button in the app to open the Google speech prompt, which listens for my voice command and then activates/deactivates the matching relay-control button.
I would like to do the same with continuous speech recognition, so that the app listens for my commands all the time without the user pressing a button, allowing hands-free operation.
Here is my existing code, a very simple voice-recognition flow that toggles the on/off buttons for the various devices wired to the relays:
public void micclick(View view) {
    if (view.getId() == R.id.mic) {
        promptSpeechInput();
    }
}

private void promptSpeechInput() {
    Intent i = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    i.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    i.putExtra(RecognizerIntent.EXTRA_LANGUAGE, Locale.getDefault());
    i.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speak!");
    try {
        startActivityForResult(i, 100);
    } catch (ActivityNotFoundException a) {
        Toast.makeText(MainActivity.this, "Sorry, your device doesn't support speech input", Toast.LENGTH_SHORT).show();
    }
}
@Override
public void onActivityResult(int requestCode, int resultCode, Intent i) {
    super.onActivityResult(requestCode, resultCode, i);
    String voicetxt;
    switch (requestCode) {
        case 100:
            if (resultCode == RESULT_OK && i != null) {
                ArrayList<String> result2 = i.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
                voicetxt = result2.get(0);
                if (voicetxt.equals("fan on")) {
                    StringBuffer result = new StringBuffer();
                    toggleButton1.setChecked(true);
                    result.append("Fan: ").append(toggleButton1.getText());
                    sc.onRelayNumber = "a";
                    new Thread(sc).start();
                    Toast.makeText(MainActivity.this, result.toString(), Toast.LENGTH_SHORT).show();
                }
                if (voicetxt.equals("fan off")) {
                    StringBuffer result = new StringBuffer();
                    toggleButton1.setChecked(false);
                    result.append("Fan: ").append(toggleButton1.getText());
                    sc.onRelayNumber = "a_off";
                    new Thread(sc).start();
                    Toast.makeText(MainActivity.this, result.toString(), Toast.LENGTH_SHORT).show();
                }
                if (voicetxt.equals("light on")) {
                    StringBuffer result = new StringBuffer();
                    toggleButton2.setChecked(true);
                    result.append("Light: ").append(toggleButton2.getText());
                    sc.onRelayNumber = "b";
                    new Thread(sc).start();
                    Toast.makeText(MainActivity.this, result.toString(), Toast.LENGTH_SHORT).show();
                }
                if (voicetxt.equals("light off")) {
                    StringBuffer result = new StringBuffer();
                    toggleButton2.setChecked(false);
                    result.append("Light: ").append(toggleButton2.getText());
                    sc.onRelayNumber = "b_off";
                    new Thread(sc).start();
                    Toast.makeText(MainActivity.this, result.toString(), Toast.LENGTH_SHORT).show();
                }
                if (voicetxt.equals("air conditioner on")) {
                    StringBuffer result = new StringBuffer();
                    toggleButton3.setChecked(true);
                    result.append("AC: ").append(toggleButton3.getText());
                    sc.onRelayNumber = "c";
                    new Thread(sc).start();
                    Toast.makeText(MainActivity.this, result.toString(), Toast.LENGTH_SHORT).show();
                }
                if (voicetxt.equals("air conditioner off")) {
                    StringBuffer result = new StringBuffer();
                    toggleButton3.setChecked(false);
                    result.append("AC: ").append(toggleButton3.getText());
                    sc.onRelayNumber = "c_off";
                    new Thread(sc).start();
                    Toast.makeText(MainActivity.this, result.toString(), Toast.LENGTH_SHORT).show();
                }
                if (voicetxt.equals("heater on")) {
                    StringBuffer result = new StringBuffer();
                    toggleButton4.setChecked(true);
                    result.append("Heater: ").append(toggleButton4.getText());
                    sc.onRelayNumber = "d";
                    new Thread(sc).start();
                    Toast.makeText(MainActivity.this, result.toString(), Toast.LENGTH_SHORT).show();
                }
                if (voicetxt.equals("heater off")) {
                    StringBuffer result = new StringBuffer();
                    toggleButton4.setChecked(false);
                    result.append("Heater: ").append(toggleButton4.getText());
                    sc.onRelayNumber = "d_off";
                    new Thread(sc).start();
                    Toast.makeText(MainActivity.this, result.toString(), Toast.LENGTH_SHORT).show();
                }
            }
            break;
    }
}
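As an aside, the repeated if blocks above can be collapsed into a lookup table that maps each spoken phrase to the relay command it should send. A minimal sketch in plain Java; the class and method names (CommandTable, relayFor) are mine, and the relay codes ("a", "a_off", etc.) are taken from the code above:

```java
import java.util.HashMap;
import java.util.Map;

// Maps each recognized phrase to the relay command string it should trigger.
class CommandTable {
    private static final Map<String, String> COMMANDS = new HashMap<>();
    static {
        COMMANDS.put("fan on", "a");
        COMMANDS.put("fan off", "a_off");
        COMMANDS.put("light on", "b");
        COMMANDS.put("light off", "b_off");
        COMMANDS.put("air conditioner on", "c");
        COMMANDS.put("air conditioner off", "c_off");
        COMMANDS.put("heater on", "d");
        COMMANDS.put("heater off", "d_off");
    }

    // Returns the relay command for a phrase, or null if the phrase is unknown.
    // Normalizing case and whitespace makes matching more forgiving of the recognizer.
    static String relayFor(String phrase) {
        return COMMANDS.get(phrase.toLowerCase().trim());
    }
}
```

With this, the whole if-chain reduces to one lookup plus a null check, and adding a new device is a one-line change.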
I would like to achieve the same functionality without having to press a button. Note that I am new to Android app development. Please describe the use of external libraries if they are required, because I don't think continuous recognition is possible with Google's RecognizerIntent. I suspect I may need to include a library such as CMUSphinx, but I don't know how to go about it.
Answer (score: 1):
There are a couple of things you can do for continuous recognition/dictation mode. You can use Android's own Google speech recognition, though it is not recommended for continuous recognition (as stated at https://developer.android.com/reference/android/speech/SpeechRecognizer.html):
"The implementation of this API is likely to stream audio to remote servers to perform speech recognition. As such this API is not intended to be used for continuous recognition, which would consume a significant amount of battery and bandwidth."
But if you really need it, you can work around this by creating your own class that implements IRecognitionListener. (I wrote this in Xamarin.Android; the syntax is very similar to native Android.)
public class CustomRecognizer : Java.Lang.Object, IRecognitionListener, TextToSpeech.IOnInitListener
{
    private Context _context;
    private TextToSpeech txtspeech;
    private SpeechRecognizer _speech;
    private Intent _speechIntent;
    public string Words;

    public CustomRecognizer(Context _context)
    {
        this._context = _context;
        Words = "";
        _speech = SpeechRecognizer.CreateSpeechRecognizer(this._context);
        _speech.SetRecognitionListener(this);
        _speechIntent = new Intent(RecognizerIntent.ActionRecognizeSpeech);
        _speechIntent.PutExtra(RecognizerIntent.ExtraLanguageModel, RecognizerIntent.LanguageModelFreeForm);
        _speechIntent.PutExtra(RecognizerIntent.ExtraPreferOffline, true);
        _speechIntent.PutExtra(RecognizerIntent.ExtraSpeechInputCompleteSilenceLengthMillis, 1000);
        _speechIntent.PutExtra(RecognizerIntent.ExtraSpeechInputPossiblyCompleteSilenceLengthMillis, 1000);
        _speechIntent.PutExtra(RecognizerIntent.ExtraSpeechInputMinimumLengthMillis, 1500);
    }
    void startover()
    {
        _speech.Destroy();
        _speech = SpeechRecognizer.CreateSpeechRecognizer(this._context);
        _speech.SetRecognitionListener(this);
        _speechIntent = new Intent(RecognizerIntent.ActionRecognizeSpeech);
        _speechIntent.PutExtra(RecognizerIntent.ExtraSpeechInputCompleteSilenceLengthMillis, 1000);
        _speechIntent.PutExtra(RecognizerIntent.ExtraSpeechInputPossiblyCompleteSilenceLengthMillis, 1000);
        _speechIntent.PutExtra(RecognizerIntent.ExtraSpeechInputMinimumLengthMillis, 1500);
        StartListening();
    }

    public void StartListening()
    {
        _speech.StartListening(_speechIntent);
    }

    public void StopListening()
    {
        _speech.StopListening();
    }
    public void OnBeginningOfSpeech() { }

    public void OnBufferReceived(byte[] buffer) { }

    public void OnEndOfSpeech() { }

    public void OnError([GeneratedEnum] SpeechRecognizerError error)
    {
        Words = error.ToString();
        startover();
    }

    public void OnEvent(int eventType, Bundle @params) { }

    public void OnPartialResults(Bundle partialResults) { }

    public void OnReadyForSpeech(Bundle @params) { }
    public void OnResults(Bundle results)
    {
        var matches = results.GetStringArrayList(SpeechRecognizer.ResultsRecognition);
        if (matches == null)
            Words = "Null";
        else if (matches.Count != 0)
            Words = matches[0];
        else
            Words = "";
        // do anything you want with the result here

        startover();
    }

    public void OnRmsChanged(float rmsdB) { }
    public void OnInit([GeneratedEnum] OperationResult status)
    {
        if (status == OperationResult.Success)
            txtspeech.SetLanguage(Java.Util.Locale.Default);
    }
}
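Since the question uses native Java rather than Xamarin, the same restart-on-result trick translates directly to android.speech.SpeechRecognizer. A minimal, untested sketch (class and callback names follow the Android SDK; the ContinuousRecognizer class and restart() helper are my own names, and you would wire the dispatch comment up to the relay code from the question):

```java
import android.content.Context;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;
import java.util.ArrayList;

// Restarts listening after every result or error, approximating continuous recognition.
public class ContinuousRecognizer implements RecognitionListener {
    private final Context context;
    private SpeechRecognizer speech;
    private final Intent intent;

    public ContinuousRecognizer(Context context) {
        this.context = context;
        speech = SpeechRecognizer.createSpeechRecognizer(context);
        speech.setRecognitionListener(this);
        intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    }

    public void startListening() {
        speech.startListening(intent);
    }

    // Tear down and recreate the recognizer, then listen again.
    private void restart() {
        speech.destroy();
        speech = SpeechRecognizer.createSpeechRecognizer(context);
        speech.setRecognitionListener(this);
        startListening();
    }

    @Override public void onResults(Bundle results) {
        ArrayList<String> matches =
                results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
        if (matches != null && !matches.isEmpty()) {
            String heard = matches.get(0);
            // dispatch "heard" to your relay-toggling code here
        }
        restart();
    }

    @Override public void onError(int error) { restart(); }

    // Remaining callbacks are no-ops for this sketch.
    @Override public void onReadyForSpeech(Bundle params) { }
    @Override public void onBeginningOfSpeech() { }
    @Override public void onRmsChanged(float rmsdB) { }
    @Override public void onBufferReceived(byte[] buffer) { }
    @Override public void onEndOfSpeech() { }
    @Override public void onPartialResults(Bundle partialResults) { }
    @Override public void onEvent(int eventType, Bundle params) { }
}
```

Note that creating and starting the recognizer must happen on the main thread, and this carries the same battery/bandwidth caveat quoted above.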
Call it in your Activity:
void StartRecording()
{
    if (!PackageManager.HasSystemFeature(PackageManager.FeatureMicrophone))
    {
        // no microphone, no recording: disable the button and alert the user
        Toast.MakeText(this, "NO MICROPHONE", ToastLength.Short).Show();
    }
    else
    {
        // you can pass any object you want to connect to your recognizer here (I am passing the Activity)
        CustomRecognizer voice = new CustomRecognizer(this);
        voice.StartListening();
    }
}
Don't forget to request permission to use the microphone!
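Concretely, SpeechRecognizer needs the RECORD_AUDIO permission declared in AndroidManifest.xml (and, on Android 6.0 and later, it is a dangerous permission that must also be requested at runtime):

```xml
<uses-permission android:name="android.permission.RECORD_AUDIO" />
```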
Notes:
- This removes the annoying "tap to start recording" prompt.
- It records from the moment you call StartListening() and never stops, because every time a recording finishes I call startover() (or StartListening()) again.
- It is still a rough workaround: while your recording is being processed, the recognizer receives no audio input until StartListening() is called again (there is no way around this).
- Google's recognizer is not great for voice commands, because its language model targets full [lang] sentences; you cannot restrict the vocabulary, and Google will always try to produce a "good sentence" as the result.
For better accuracy and user experience, I really recommend the Google Cloud Speech API (but it must be online, and it is expensive). My second suggestion is CMUSphinx/PocketSphinx, which is open source and works offline, but you have to do everything manually.
PocketSphinx advantages:
- Works offline.
- You can train your own acoustic model (voice, etc.), so you can tune it to your environment and pronunciation.
PocketSphinx disadvantages: you have to do everything manually, including setting up your acoustic model, dictionary, language model, thresholds, and so on. (If you want something simple, it is overkill.)
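For reference, a keyword-spotting setup with PocketSphinx on Android looks roughly like the following sketch, adapted from the CMUSphinx Android tutorial. The asset names (en-us-ptm, cmudict-en-us.dict), the search name "commands", and the keyphrase "fan on" are placeholders you must replace with your own models and phrases:

```java
import android.content.Context;
import java.io.File;
import java.io.IOException;
import edu.cmu.pocketsphinx.Assets;
import edu.cmu.pocketsphinx.SpeechRecognizer;
import edu.cmu.pocketsphinx.SpeechRecognizerSetup;

public class SphinxSetup {
    // Sync the bundled model files to storage and build a recognizer
    // configured for offline keyphrase spotting.
    public static SpeechRecognizer create(Context context) throws IOException {
        Assets assets = new Assets(context);
        File assetDir = assets.syncAssets();
        SpeechRecognizer recognizer = SpeechRecognizerSetup.defaultSetup()
                .setAcousticModel(new File(assetDir, "en-us-ptm"))
                .setDictionary(new File(assetDir, "cmudict-en-us.dict"))
                .setKeywordThreshold(1e-20f) // lower = more sensitive, more false alarms
                .getRecognizer();
        // Listen for one fixed phrase; attach a RecognitionListener
        // (recognizer.addListener) to react when it is spotted.
        recognizer.addKeyphraseSearch("commands", "fan on");
        recognizer.startListening("commands");
        return recognizer;
    }
}
```

This runs fully offline, which is exactly the trade-off described above: more setup work, but no network, and you control the vocabulary.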