I am trying to build a mini chatbot using SpeechSynthesis and SpeechRecognition. Basically, I want to start with text-to-speech: say something to the user, then listen to the user (speech-to-text), and then speak the recognized text back to the user. Here is my code:
speak("Say something");
var spokenWord = hear();
speak(spokenWord);

function speak(message) {
    var synth = window.speechSynthesis;
    var utterThis = new SpeechSynthesisUtterance(message);
    synth.speak(utterThis);
    utterThis.onend = function (event) {
        console.log('Utterance has finished being spoken after ' + event.elapsedTime + ' milliseconds.');
    }
}

function hear() {
    var SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
    var recognition = new SpeechRecognition();
    recognition.start();
    recognition.onresult = function (event) {
        var current = event.resultIndex;
        var transcript = event.results[current][0].transcript;
        console.log(transcript);
        recognition.stop();
        return transcript;
    }
}
Since these methods are asynchronous, this does not work the way I expect: the second speak starts before the listening has finished. Any suggestions on how to fix this?
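One common way to get the sequencing the question describes is to wrap each callback-based Web Speech API call in a Promise and await them in order. A minimal sketch, assuming a browser that supports these APIs; the `chat` wrapper name is illustrative, not part of the original code:

```javascript
// Promise wrappers around the callback-based Web Speech APIs.
function speak(message) {
    return new Promise(function (resolve) {
        var utterThis = new SpeechSynthesisUtterance(message);
        utterThis.onend = function () { resolve(); };
        window.speechSynthesis.speak(utterThis);
    });
}

function hear() {
    return new Promise(function (resolve, reject) {
        var SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
        var recognition = new SpeechRecognition();
        recognition.onresult = function (event) {
            recognition.stop();
            resolve(event.results[event.resultIndex][0].transcript);
        };
        recognition.onerror = function (event) { reject(event.error); };
        recognition.start();
    });
}

async function chat() {
    await speak("Say something");   // wait until the prompt is fully spoken
    var spokenWord = await hear();  // wait for one recognition result
    await speak(spokenWord);        // echo it back
}
```

Each `await` suspends until the corresponding `onend`/`onresult` callback fires, so the second `speak` cannot start before listening has finished.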
Answer 0 (score: 0)
async function waitUntil(check: () => boolean, intervalPeriodMs = 300): Promise<void> {
    if (check()) {
        return
    }
    return new Promise((resolve, reject) => {
        const wait = setInterval(() => {
            try {
                if (check()) {
                    clearInterval(wait)
                    resolve()
                }
            } catch (err) {
                clearInterval(wait)
                reject(err)
            }
        }, intervalPeriodMs)
    })
}
await waitUntil(() => !window.speechSynthesis.speaking, 300)
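The same polling idea in plain JavaScript, exercised here against an ordinary flag standing in for `window.speechSynthesis.speaking`; the flag name and timings are illustrative:

```javascript
// Plain-JS port of the answer's waitUntil: poll `check` until it returns true.
function waitUntil(check, intervalPeriodMs = 300) {
    if (check()) return Promise.resolve();
    return new Promise((resolve, reject) => {
        const wait = setInterval(() => {
            try {
                if (check()) {
                    clearInterval(wait);
                    resolve();
                }
            } catch (err) {
                clearInterval(wait);
                reject(err);
            }
        }, intervalPeriodMs);
    });
}

// Illustrative usage: a flag flipped by some other async work stands in for
// !window.speechSynthesis.speaking.
let speaking = true;
setTimeout(() => { speaking = false; }, 50);
waitUntil(() => !speaking, 10).then(() => console.log('done speaking'));
```

Note that polling resolves only at the next interval tick after the condition becomes true, so it adds up to `intervalPeriodMs` of latency compared to a direct `onend` callback.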
Answer 1 (score: 0)
In my case, window.speechSynthesis.speaking stayed true long after the speech had finished. I noticed this was only a problem with voices that have localService: true, so if you can avoid those, your onend callback will be called.