How do I write voice triggers to navigate Google Glass cards?
This is how I see it happening:
1) "Ok Glass, Start My Program"
2) Application begins and shows the first card
3) User can say "Next Card" to move to the next card
(somewhat the equivalent of swiping forward when in the timeline)
4) User can say "Previous Card" to go back
The cards I need to display are simple text and images, and I would like to know whether I can set up some kind of listener for voice commands while a card is being shown.
I have already looked at Glass voice command nearest match from given list, but I could not get the code to run, even though I do have all the libraries.
Side note: the user can still see the card while using voice commands. His hands are also busy, so tapping/swiping is not an option.
How can I control the timeline in my Immersion application using voice control only? Thanks a lot!
I am also tracking https://code.google.com/p/google-glass-api/issues/detail?id=273.
My ongoing research keeps bringing me back to the Google Glass Developer docs and Google's recommended way of listening for gestures: https://developers.google.com/glass/develop/gdk/input/touch#detecting_gestures_with_a_gesture_detector
How can we activate those gestures with voice commands instead?
Android just released remote input for wearables in the beta (http://developer.android.com/wear/notifications/remote-input.html). Is there a way I can use it to answer my question? It still feels like we are one step away, because we can call a service, but there is no "sleep" and "wake up" as a background service while we are talking.
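For reference, here is roughly the kind of card setup I want to drive by voice. This is only a minimal sketch of a CardScrollView immersion (the class and adapter names are placeholders of mine); the voice part is exactly what is missing:

import android.app.Activity;
import android.os.Bundle;
import android.view.View;
import android.view.ViewGroup;
import android.widget.AdapterView;
import com.google.android.glass.widget.CardBuilder;
import com.google.android.glass.widget.CardScrollAdapter;
import com.google.android.glass.widget.CardScrollView;

public class CardsActivity extends Activity {
    private CardScrollView mCardScroller;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        mCardScroller = new CardScrollView(this);
        mCardScroller.setAdapter(new MyCardAdapter());
        setContentView(mCardScroller);
    }

    @Override
    protected void onResume() {
        super.onResume();
        mCardScroller.activate();   // start accepting scroll input
    }

    @Override
    protected void onPause() {
        mCardScroller.deactivate();
        super.onPause();
    }

    // Placeholder adapter serving three simple text cards.
    private class MyCardAdapter extends CardScrollAdapter {
        private final String[] mTexts = {"Card 1", "Card 2", "Card 3"};

        @Override public int getCount() { return mTexts.length; }
        @Override public Object getItem(int position) { return mTexts[position]; }

        @Override public int getPosition(Object item) {
            for (int i = 0; i < mTexts.length; i++) {
                if (mTexts[i].equals(item)) return i;
            }
            return AdapterView.INVALID_POSITION;
        }

        @Override public View getView(int position, View convertView, ViewGroup parent) {
            return new CardBuilder(CardsActivity.this, CardBuilder.Layout.TEXT)
                    .setText(mTexts[position])
                    .getView(convertView, parent);
        }
    }
}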
Answer 0 (score: 4)
Define this in your onCreate method:
// Fields: AudioManager mAudioManager; SpeechRecognizer sr; Intent intent;
mAudioManager = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
// Optionally silence other audio streams while listening:
// mAudioManager.setStreamSolo(AudioManager.STREAM_VOICE_CALL, true);

sr = SpeechRecognizer.createSpeechRecognizer(context);
sr.setRecognitionListener(new listener(context));

intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
        RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
intent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, context.getPackageName());
sr.startListening(intent);
Then just add this listener class inside your own class:
class listener implements RecognitionListener
{
    Context context1;

    public listener(Context context)
    {
        context1 = context;
    }

    public void onReadyForSpeech(Bundle params)
    {
        //Log.d(TAG, "onReadyForSpeech");
    }

    public void onBeginningOfSpeech()
    {
        //Log.d(TAG, "onBeginningOfSpeech");
    }

    public void onRmsChanged(float rmsdB)
    {
        //Log.d(TAG, "onRmsChanged");
    }

    public void onBufferReceived(byte[] buffer)
    {
        //Log.d(TAG, "onBufferReceived");
    }

    public void onEndOfSpeech()
    {
        // Restart listening as soon as the current utterance ends,
        // so the recognizer runs continuously.
        sr.startListening(intent);
    }

    public void onError(int error)
    {
        // 1 - Network timeout         5 - Other client-side errors
        // 2 - Network error           6 - No speech input
        // 3 - Audio recording error   7 - No recognition result matched
        // 4 - Server error            8 - RecognitionService busy
        //                             9 - Insufficient permissions
        // Restart listening after any error so recognition keeps running.
        if (error == 1 || error == 2 || error == 3 || error == 4 || error == 5
                || error == 6 || error == 7 || error == 8 || error == 9)
        {
            sr.startListening(intent);
            //Log.i("onError startListening", "onError startListening" + error);
        }
    }

    public void onResults(Bundle results)
    {
        ArrayList<String> data = results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
        for (int i = 0; i < data.size(); i++)
        {
            // Each entry is a candidate transcription, best match first.
            //Log.d(TAG, "result " + data.get(i));
        }
    }

    public void onPartialResults(Bundle partialResults)
    {
        //Log.d(TAG, "onPartialResults");
    }

    public void onEvent(int eventType, Bundle params)
    {
        //Log.d(TAG, "onEvent " + eventType);
    }
}
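To tie this back to the question, you can match the best transcription inside onResults and move the card scroller yourself. This is only a sketch, under the assumption that your immersion exposes its CardScrollView as mCardScroller (a placeholder name of mine, matching the sketch in the question):

public void onResults(Bundle results)
{
    ArrayList<String> data = results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
    if (data == null || data.isEmpty()) return;

    String heard = data.get(0).toLowerCase();   // best candidate first
    int position = mCardScroller.getSelectedItemPosition();

    if (heard.contains("next card")
            && position < mCardScroller.getAdapter().getCount() - 1) {
        mCardScroller.setSelection(position + 1);   // like swiping forward
    } else if (heard.contains("previous card") && position > 0) {
        mCardScroller.setSelection(position - 1);   // like swiping back
    }
}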
Answer 1 (score: 2)
I am writing out the whole code in detail because it took me so long to get this working... maybe it will save someone else some valuable time.
This code is an implementation of Google's contextual voice commands, as described by the Google developers here: Contextual voice commands
ContextualMenuActivity.java
package com.drace.contextualvoicecommands;

import android.app.Activity;
import android.os.Bundle;
import android.view.Menu;
import android.view.MenuItem;

import com.drace.contextualvoicecommands.R;
import com.google.android.glass.view.WindowUtils;

public class ContextualMenuActivity extends Activity {

    @Override
    protected void onCreate(Bundle bundle) {
        super.onCreate(bundle);
        // Requests a voice menu on this activity. As for any other
        // window feature, be sure to request this before
        // setContentView() is called.
        getWindow().requestFeature(WindowUtils.FEATURE_VOICE_COMMANDS);
        setContentView(R.layout.activity_main);
    }

    @Override
    public boolean onCreatePanelMenu(int featureId, Menu menu) {
        if (featureId == WindowUtils.FEATURE_VOICE_COMMANDS) {
            getMenuInflater().inflate(R.menu.main, menu);
            return true;
        }
        // Pass through to super to set up the touch menu.
        return super.onCreatePanelMenu(featureId, menu);
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        getMenuInflater().inflate(R.menu.main, menu);
        return true;
    }

    @Override
    public boolean onMenuItemSelected(int featureId, MenuItem item) {
        if (featureId == WindowUtils.FEATURE_VOICE_COMMANDS) {
            switch (item.getItemId()) {
                case R.id.dogs_menu_item:
                    // handle top-level dogs menu item
                    break;
                case R.id.cats_menu_item:
                    // handle top-level cats menu item
                    break;
                case R.id.lab_menu_item:
                    // handle second-level labrador menu item
                    break;
                case R.id.golden_menu_item:
                    // handle second-level golden menu item
                    break;
                case R.id.calico_menu_item:
                    // handle second-level calico menu item
                    break;
                case R.id.cheshire_menu_item:
                    // handle second-level cheshire menu item
                    break;
                default:
                    return true;
            }
            return true;
        }
        // Good practice to pass through to super if not handled.
        return super.onMenuItemSelected(featureId, item);
    }
}
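The activity above inflates R.menu.main, which the answer does not show. A res/menu/main.xml consistent with the item IDs in the switch statement and the strings.xml below would look roughly like this (my reconstruction, following the Google contextual voice commands sample, not the original author's file):

res/menu/main.xml

<menu xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:id="@+id/dogs_menu_item"
          android:title="@string/show_me_dogs">
        <menu>
            <item android:id="@+id/lab_menu_item"
                  android:title="@string/labrador" />
            <item android:id="@+id/golden_menu_item"
                  android:title="@string/golden" />
        </menu>
    </item>
    <item android:id="@+id/cats_menu_item"
          android:title="@string/show_me_cats">
        <menu>
            <item android:id="@+id/cheshire_menu_item"
                  android:title="@string/cheshire" />
            <item android:id="@+id/calico_menu_item"
                  android:title="@string/calico" />
        </menu>
    </item>
</menu>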
activity_main.xml (layout)
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent" >

    <TextView
        android:id="@+id/coming_soon"
        android:layout_alignParentTop="true"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/voice_command_test"
        android:textSize="22sp"
        android:layout_marginRight="40px"
        android:layout_marginTop="30px"
        android:layout_marginLeft="210px" />

</RelativeLayout>
strings.xml
<resources>
    <string name="app_name">Contextual voice commands</string>
    <string name="voice_start_command">Voice commands</string>
    <string name="voice_command_test">Say \"Okay, Glass\"</string>
    <string name="show_me_dogs">Dogs</string>
    <string name="labrador">labrador</string>
    <string name="golden">golden</string>
    <string name="show_me_cats">Cats</string>
    <string name="cheshire">cheshire</string>
    <string name="calico">calico</string>
</resources>
AndroidManifest.xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.drace.contextualvoicecommands"
    android:versionCode="1"
    android:versionName="1.0" >

    <uses-sdk
        android:minSdkVersion="19"
        android:targetSdkVersion="19" />

    <uses-permission android:name="com.google.android.glass.permission.DEVELOPMENT" />

    <application
        android:allowBackup="true"
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name" >

        <activity
            android:name="com.drace.contextualvoicecommands.ContextualMenuActivity"
            android:label="@string/app_name" >
            <intent-filter>
                <action android:name="com.google.android.glass.action.VOICE_TRIGGER" />
            </intent-filter>
            <meta-data
                android:name="com.google.android.glass.VoiceTrigger"
                android:resource="@xml/voice_trigger_start" />
        </activity>
    </application>

</manifest>
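One file the manifest references but the answer does not show is @xml/voice_trigger_start. Assuming the standard GDK voice-trigger format, res/xml/voice_trigger_start.xml would be something like the following; an unlisted keyword like this is what the com.google.android.glass.permission.DEVELOPMENT permission above allows during development:

<?xml version="1.0" encoding="utf-8"?>
<trigger keyword="@string/voice_start_command" />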
Tested and working well on Google Glass XE22!
Answer 3 (score: 0)
You may want to try the contextual voice commands available in the GDK. While it temporarily overlays the screen with a menu, it does allow voice-only input.
Answer 4 (score: 0)
I did something similar for one of my apps. It doesn't require the "ok glass" screen at all, but the user does need to know the commands in advance. I explained it a bit and provided links in this question; see my answer here: Glass GDK: Contextual voice commands without the "Ok Glass"
I hope this helps!