I'm trying to add support for the Direct Line Speech channel to my dialog bot. I've been reading Microsoft's tutorial on how to do this, but it only uses an echo bot. I want to be able to use a dialog bot and have it respond with voice. I've already created a Speech resource in Azure and enabled the Direct Line Speech channel on my bot resource in Azure. Has anyone successfully added speech to a dialog bot? I read that there should be a speech prompt option, but I can't find that property on the PromptOptions object.
Answer 0 (score: 0)
How speech is configured depends on which type you want to use, which may also mean updating both the bot and the client you're using.
A quick note on clients (that is, channels): it is the channel that determines whether or not speech is supported.
For DL Speech, you'll need to add/update your bot's index.js code to include the following:
[...]
// Catch-all for errors.
const onTurnErrorHandler = async (context, error) => {
    // This check writes out errors to console log .vs. app insights.
    // NOTE: In production environment, you should consider logging this to Azure
    // application insights. See https://aka.ms/bottelemetry for telemetry
    // configuration instructions.
    console.error(`\n [onTurnError] unhandled error: ${ error }`);

    // Send a trace activity, which will be displayed in Bot Framework Emulator
    await context.sendTraceActivity(
        'OnTurnError Trace',
        `${ error }`,
        'https://www.botframework.com/schemas/error',
        'TurnError'
    );

    // Send a message to the user
    await context.sendActivity('The bot encountered an error or bug.');
    await context.sendActivity('To continue to run this bot, please fix the bot source code.');
};

// Set the onTurnError for the singleton BotFrameworkAdapter.
adapter.onTurnError = onTurnErrorHandler;

[...]

// Listen for Upgrade requests for Streaming.
server.on('upgrade', (req, socket, head) => {
    // Create an adapter scoped to this WebSocket connection to allow storing session data.
    const streamingAdapter = new BotFrameworkAdapter({
        appId: process.env.MicrosoftAppId,
        appPassword: process.env.MicrosoftAppPassword
    });

    // Set onTurnError for the BotFrameworkAdapter created for each connection.
    streamingAdapter.onTurnError = onTurnErrorHandler;

    streamingAdapter.useWebSocket(req, socket, head, async (context) => {
        // After connecting via WebSocket, run this logic for every request sent over
        // the WebSocket connection.
        await myBot.run(context);
    });
});
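For context, the [...] sections above elide the usual scaffolding from the Bot Framework JavaScript samples. Here's a minimal sketch of that setup, assuming the standard sample layout (the MyBot class and ./bot module are hypothetical placeholders for your own dialog bot):

// index.js scaffolding (sketch) - assumes the standard BotBuilder v4 sample layout.
const restify = require('restify');
const { BotFrameworkAdapter } = require('botbuilder');
const { MyBot } = require('./bot'); // hypothetical: your dialog bot exposing run(context)

// Create the HTTP server that also receives the WebSocket Upgrade requests above.
const server = restify.createServer();
server.listen(process.env.port || process.env.PORT || 3978, () => {
    console.log(`\n${ server.name } listening to ${ server.url }`);
});

// Singleton adapter used for regular (non-streaming) requests.
const adapter = new BotFrameworkAdapter({
    appId: process.env.MicrosoftAppId,
    appPassword: process.env.MicrosoftAppPassword
});

const myBot = new MyBot();

// Standard messaging endpoint for non-streaming channels.
server.post('/api/messages', (req, res) => {
    adapter.processActivity(req, res, async (context) => {
        await myBot.run(context);
    });
});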
Then, in Web Chat, you'll need to include the following. (You can find this code in this DL Speech sample. Also note that you'll need to update the fetch address to your own token-generating API.)
[...]
const fetchCredentials = async () => {
    const res = await fetch('https://webchat-mockbot-streaming.azurewebsites.net/speechservices/token', {
        method: 'POST'
    });

    if (!res.ok) {
        throw new Error('Failed to fetch authorization token and region.');
    }

    const { region, token: authorizationToken } = await res.json();

    return { authorizationToken, region };
};

// Create a set of adapters for Web Chat to use with Direct Line Speech channel.
const adapters = await window.WebChat.createDirectLineSpeechAdapters({
    fetchCredentials
});

// Pass the set of adapters to Web Chat.
window.WebChat.renderWebChat(
    {
        ...adapters
    },
    document.getElementById('webchat')
);
[...]
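The fetchCredentials function above expects a server-side endpoint that exchanges your Speech subscription key for a short-lived authorization token, so the key itself never reaches the browser. A minimal sketch of such an endpoint, assuming the restify server from the bot's index.js and hypothetical SPEECH_SERVICES_SUBSCRIPTION_KEY / SPEECH_SERVICES_REGION environment variables (the issueToken URL is the standard Cognitive Services token endpoint):

// Speech token endpoint (sketch): trades the subscription key for a temporary token.
const fetch = require('node-fetch');

server.post('/speechservices/token', async (req, res) => {
    const region = process.env.SPEECH_SERVICES_REGION; // e.g. 'westus2' (hypothetical variable)
    const tokenRes = await fetch(`https://${ region }.api.cognitive.microsoft.com/sts/v1.0/issueToken`, {
        method: 'POST',
        headers: { 'Ocp-Apim-Subscription-Key': process.env.SPEECH_SERVICES_SUBSCRIPTION_KEY }
    });

    if (!tokenRes.ok) {
        return res.send(500, 'Failed to issue speech token.');
    }

    // issueToken returns the token as plain text; hand back token + region as JSON.
    res.send(200, { token: await tokenRes.text(), region });
});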
There are also additional resources out there that can help you better understand DL Speech.
For CS Speech, you need a valid Cognitive Services subscription. Once you've set up the Speech service in Azure, you can use the subscription key to generate a token for enabling CS Speech (you can also reference this Web Chat sample; enabling it requires no changes to the bot). Again, you'll need to set up an API for generating tokens, since best practice is not to include any keys in your HTML. That's what I did in the following example to fetch the DL token:
let authorizationToken;
let region = '<<SPEECH SERVICES REGION>>';

const response = await fetch(`https://${ region }.api.cognitive.microsoft.com/sts/v1.0/issueToken`, {
    method: 'POST',
    headers: {
        'Ocp-Apim-Subscription-Key': '<<SUBSCRIPTION KEY>>'
    }
});

if (response.status === 200) {
    // issueToken returns the authorization token as plain text.
    authorizationToken = await response.text();
} else {
    console.log('error');
}

const webSpeechPonyfillFactory = await window.WebChat.createCognitiveServicesSpeechServicesPonyfillFactory({
    authorizationToken,
    region
});

const res = await fetch('http://localhost:3500/directline/token', { method: 'POST' });
const { token } = await res.json();

window.WebChat.renderWebChat(
    {
        directLine: window.WebChat.createDirectLine({ token }),
        webSpeechPonyfillFactory
    },
    document.getElementById('webchat')
);
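The http://localhost:3500/directline/token fetch above points at the kind of token API mentioned earlier. Here's a sketch of what that endpoint might look like, assuming the same restify server and a hypothetical DIRECT_LINE_SECRET environment variable (the tokens/generate URL is the official Direct Line v3 endpoint):

// Direct Line token endpoint (sketch): exchanges the Direct Line secret
// for a single-conversation token so the secret stays server-side.
const fetch = require('node-fetch');

server.post('/directline/token', async (req, res) => {
    const dlRes = await fetch('https://directline.botframework.com/v3/directline/tokens/generate', {
        method: 'POST',
        headers: { Authorization: `Bearer ${ process.env.DIRECT_LINE_SECRET }` } // hypothetical variable
    });

    if (!dlRes.ok) {
        return res.send(500, 'Failed to generate Direct Line token.');
    }

    const { token } = await dlRes.json();
    res.send(200, { token });
});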
Hope this helps!