I am building an Ionic Angular app with speech recognition (using @ionic-native/speech-recognition).
What I want to do is, as soon as I get a result from the ASR, send it to a service that makes an HTTP POST call to an API (a hosted chatbot).
My setup consists of an HTTP service and a page with a text input and a send button (for text-based communication with the chatbot), plus a button that starts listening from the microphone. The response is then printed on screen (and updated as soon as a new response arrives).
The problem is that, although both methods (text and speech) work as expected, the text method updates the response text area automatically, while the speech method does not: it only updates after I click some other active component.
Since I have to work with two Observables (the speech recognition and the HTTP call), I am using mergeMap to pipe the speech recognition result into the HTTP call (which depends on it).
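To make the intended data flow concrete, here is a stripped-down sketch of the same dependent chaining using plain Promises instead of Observables; fakeListen and fakeAsk are hypothetical stand-ins for the plugin and the bot service, not real APIs:

```typescript
// Hypothetical stand-ins: fakeListen mimics the ASR result,
// fakeAsk mimics the chatbot HTTP call that depends on it.
const fakeListen = async (): Promise<string[]> => ['hello bot'];
const fakeAsk = async (q: string): Promise<{ result: string }> =>
  ({ result: `echo: ${q}` });

// The chaining I want: take the first ASR match, feed it to the bot,
// and surface the bot's answer (the role mergeMap plays in the real code).
async function askFromSpeech(): Promise<string> {
  const matches = await fakeListen();    // first async source (speech)
  const res = await fakeAsk(matches[0]); // dependent second call (HTTP)
  return res.result;
}
```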
Here is the component code:
ngOnInit() {
  this.speechRecognition.hasPermission()
    .then((hasPermission: boolean) => {
      if (!hasPermission) {
        this.speechRecognition.requestPermission()
          .then(
            () => console.log('Granted'),
            () => console.log('Denied')
          );
      }
    });
}
...
startASR() { // when the "speak" button is pressed
  this.speechRecognition.startListening().pipe(
    mergeMap((matches) => this.botService.ask(matches[0]))
  ).subscribe(
    (res) => {
      this.answer = res['result'];
      alert(this.answer); // sanity check: the output here is what it should be...
    },
    (err) => alert(err)
  );
}
textBot(question: string) { // when the question is typed in the text input and the send button is pressed
  // Call the service function, which returns an Observable
  this.botService.ask(question).subscribe((res) => {
    this.answer = res['result'];
  });
  return this.answer;
}
And the template:
<ion-content>
  <ion-text color="primary">
    <h5>User: </h5>
  </ion-text>
  <ion-toolbar>
    <ion-buttons slot="primary">
      <!-- for text input mode -->
      <ion-button (click)="textBot(question)">
        <ion-icon slot="icon-only" name="chevron-forward-outline"></ion-icon>
      </ion-button>
      <!-- for speech recognition -->
      <ion-button (click)="startASR()">
        <ion-icon slot="icon-only" name="chatbubble-ellipses-outline"></ion-icon>
      </ion-button>
    </ion-buttons>
    <!-- for text input mode -->
    <ion-input type="text" [(ngModel)]="question" text-right id="input"></ion-input>
  </ion-toolbar>
  <ion-text color="secondary">
    <h5>Bot response: </h5>
  </ion-text>
  <div>
    <!-- The bot's response is printed here and SHOULD update via one-way binding -->
    {{ answer }}
  </div>
</ion-content>
Interestingly, I noticed that if I replace the speech-recognition call with a dummy Observable, it works as expected (i.e., the {{ answer }} area is updated with the response)...
startASR() { // when the "speak" button is pressed
  from([['a']]).pipe( // a generic placeholder Observable
    mergeMap((matches) => this.botService.ask(matches[0]))
  ).subscribe(
    (res) => {
      this.answer = res['result'];
      alert(this.answer); // sanity check: the output here is what it should be...
    },
    (err) => alert(err)
  );
}
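For comparison, my rough mental model of the difference (an assumption on my part, not the plugin's actual implementation): from([['a']]) emits synchronously inside the click handler, whereas the plugin's Observable seems to emit later, from a native callback. The hypothetical wrapNativeListen below sketches that second shape:

```typescript
// Hypothetical stand-in for a native API that delivers results on a
// later tick via a callback (an assumption about the plugin's shape,
// not its actual code).
function fakeNativeListen(onResult: (matches: string[]) => void): void {
  setTimeout(() => onResult(['hello bot']), 0);
}

// Wrapping the callback: the consumer's handler now runs outside the
// original click-handler call stack, unlike the synchronous from([['a']]).
function wrapNativeListen(): Promise<string[]> {
  return new Promise((resolve) => fakeNativeListen(resolve));
}
```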
Am I missing something here? What is the difference between these two startASR() functions?