Sending the parsed intent and slots back to the client from Amazon Lex

Date: 2018-07-13 07:43:16

Tags: amazon-lex

The Amazon Lex FAQ mentions that the parsed intent and slots can be sent back to the client, so that we can place the business logic in the client. However, I couldn't find any clear information about this in the Lex documentation.

My use case: send text/voice data to Amazon Lex; Lex parses the intent and slots, and then sends a JSON payload containing the intent, slots, and context data back to the client that requested it, instead of sending it to Lambda or another backend API endpoint.
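For reference, the text half of this flow can be sketched with the AWS SDK for JavaScript's LexRuntime client. This is a minimal sketch, not an official sample: the bot name, alias, user ID, and the `extractIntentAndSlots` helper are placeholders I've invented for illustration, and the AWS SDK is assumed to be loaded globally (e.g. via a script tag) with Cognito credentials already configured.

```javascript
// Pure helper: pull the fields the client cares about out of a Lex
// runtime response (PostText and PostContent both return these fields).
function extractIntentAndSlots(response) {
    return {
        intent: response.intentName,           // resolved intent, or null if not yet resolved
        slots: response.slots || {},           // slot name -> elicited value
        dialogState: response.dialogState,     // e.g. "ElicitSlot", "ReadyForFulfillment"
        sessionAttributes: response.sessionAttributes || {}
    };
}

// Hypothetical client-side call; "OrderFlowers" and "prod" are
// placeholder bot name/alias values, not from the question.
function sendToLex(text, callback) {
    var lexRuntime = new AWS.LexRuntime();
    lexRuntime.postText({
        botName: 'OrderFlowers',   // placeholder
        botAlias: 'prod',          // placeholder
        userId: 'demo-user-1',     // placeholder
        inputText: text
    }, function (err, data) {
        if (err) { return callback(err); }
        // The parsed intent and slots come straight back to the client;
        // no Lambda fulfillment hook is involved.
        callback(null, extractIntentAndSlots(data));
    });
}
```

Note that for Lex to return the slots to the client instead of invoking a function, the intent's fulfillment in the Lex console should be set to "Return parameters to client" rather than a Lambda function; Lex then replies with dialogState "ReadyForFulfillment" once all required slots are filled.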

Can someone point me to the right approach/configuration for this?

Thanks

1 Answer:

Answer 0 (score: 1)

If I understand you correctly, you want your client to receive the LexResponse and process it on the client side, rather than going through Lambda or a backend API. If so, you can try the following Lex-Audio implementation.

// Handles the click event for the mic button in your UI.
scope.audioClick = function () {
    // Cognito credentials for the Lex runtime service.
    AWS.config.credentials = new AWS.CognitoIdentityCredentials(
        { IdentityPoolId: Settings.AWSIdentityPool },
        { region: Settings.AWSRegion }
    );
    AWS.config.region = Settings.AWSRegion;

    var config = {
        lexConfig: { botName: Settings.BotName }
    };

    var conversation = new LexAudio.conversation(
        config,
        // State-change callback: update the UI placeholder text.
        function (state) {
            scope.$apply(function () {
                if (state === "Passive") {
                    scope.placeholder = Settings.PlaceholderWithMic;
                } else {
                    scope.placeholder = state + "...";
                }
            });
        },
        chatbotSuccess,
        // Error callback.
        function (error) {
            audTextContent = error;
        },
        // Audio-visualization callback (unused here).
        function (timeDomain, bufferLength) {}
    );
    conversation.advanceConversation();
};

The success callback invoked after Lex responds looks like this:

var chatbotSuccess = function (data) {
    // The Lex runtime response exposes the resolved intent as
    // "intentName", along with the elicited slot values.
    var intent = data.intentName;
    var slots = data.slots;

    // Do what you need with this data on the client side.
};
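The success callback is the natural home for the client-side business logic the question asks about. Below is a hedged sketch of branching on the response's dialogState; the field names (intentName, slots, dialogState, message) come from the Lex runtime response, while the handler bodies and return shape are placeholders of my own:

```javascript
// Sketch of client-side business logic keyed on the Lex response's
// dialogState. All return values are illustrative placeholders.
function handleLexResponse(data) {
    switch (data.dialogState) {
        case 'ReadyForFulfillment':
            // All required slots are filled; run your business logic here
            // instead of a Lambda fulfillment function.
            return { done: true, intent: data.intentName, slots: data.slots };
        case 'ElicitSlot':
        case 'ConfirmIntent':
        case 'ElicitIntent':
            // Conversation still in progress; show Lex's prompt to the user.
            return { done: false, prompt: data.message };
        case 'Failed':
        default:
            return { done: false, error: true, prompt: data.message };
    }
}
```

A function like this could be called from chatbotSuccess so that the client both fulfills completed intents and keeps multi-turn conversations going.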

Hope this gives you an idea of what you need to do. If you need a reference for Lex-Audio, there is a good post on the AWS blog: https://aws.amazon.com/blogs/machine-learning/capturing-voice-input-in-a-browser/