Testing a Dialogflow fulfillment locally with the Cloud Functions emulator

Posted: 2018-10-19 18:40:44

Tags: node.js google-cloud-functions dialogflow actions-on-google

Is it possible to test a Dialogflow fulfillment webhook locally using the Cloud Functions emulator, and if so, how should I format the request?

I have read all the documentation I could find, including the guide at https://firebase.google.com/docs/functions/local-emulator. This earlier question also seemed particularly relevant: Unit test Actions on Google Dialogflow locally

I am able to invoke my fulfillment function using the functions shell, but no matter how I format the body, I only ever seem to trigger the fallback intent or the error-catch intent.

Given the input "hello", I can verify in the Actions on Google simulator that my webhook responds successfully with the Default Welcome Intent, but when I use the same request JSON as input to the local function, I am routed to the fallback intent instead.

Is it that the functions emulator cannot perform proper intent matching locally and therefore always triggers the fallback intent, or am I simply not formatting the request correctly? Any help would be greatly appreciated!

Here is the invocation format I am using, along with the response from the shell:

firebase > fulfillment({method: 'POST', json: true, body: require("project/collabrec/testData.json")});
Sent request to function.
firebase > info: User function triggered, starting execution
info: Fallback intent triggered.
info: Execution took 15 ms, user function completed successfully

RESPONSE RECEIVED FROM FUNCTION: 200, {
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "I didn't quite catch that. Could you say that again?"
            }
          }
        ]
      }
    }
  }
}

Here are the contents of testData.json:

{
  "user": {
    "userId": "ABwppHFR0lfRsG_UM3NkvAptIkD2iUpIUNxFt-ia05PFuPajV6kRQKXu_H_ECMMe0lP_WcCsK64sH2MEIg8eqA",
    "locale": "en-US",
    "lastSeen": "2018-10-19T15:20:12Z"
  },
  "conversation": {
    "conversationId": "ABwppHHerN4CIsBZiWg7M3Tq6NwlTWkfN-_zLIIOBcKbeaz4ruymv-nZ4TKr6ExzDv1tOzszsfcgXikgqRJ9gg",
    "type": "ACTIVE",
    "conversationToken": "[]"
  },
  "inputs": [
    {
      "intent": "actions.intent.TEXT",
      "rawInputs": [
        {
          "inputType": "KEYBOARD",
          "query": "hello"
        }
      ],
      "arguments": [
        {
          "name": "text",
          "rawText": "hello",
          "textValue": "hello"
        }
      ]
    }
  ],
  "surface": {
    "capabilities": [
      {
        "name": "actions.capability.MEDIA_RESPONSE_AUDIO"
      },
      {
        "name": "actions.capability.SCREEN_OUTPUT"
      },
      {
        "name": "actions.capability.AUDIO_OUTPUT"
      },
      {
        "name": "actions.capability.WEB_BROWSER"
      }
    ]
  },
  "isInSandbox": true,
  "availableSurfaces": [
    {
      "capabilities": [
        {
          "name": "actions.capability.SCREEN_OUTPUT"
        },
        {
          "name": "actions.capability.AUDIO_OUTPUT"
        },
        {
          "name": "actions.capability.WEB_BROWSER"
        }
      ]
    }
  ],
  "requestType": "SIMULATOR"
}

Here is my Cloud Functions webhook:

const {dialogflow, Image} = require('actions-on-google');
const admin = require('firebase-admin');
const functions = require('firebase-functions');
const app = dialogflow();


app.catch((conv, error) => {
  console.log("Error intent triggered.");
  console.error(error);
  conv.ask('Sorry, I ran into an error. Please try that again.');
});

app.fallback((conv) => {
  console.log("Fallback intent triggered.");
  conv.ask("I didn't quite catch that. Could you say that again?");
});

app.intent('Default Welcome Intent', (conv) => {
  console.log("Welcome intent triggered.");
  conv.ask("Welcome!!");
});

exports.fulfillment = functions.region('europe-west1').https.onRequest(app);

Using Node v8.1.4 and these package versions:

"@google-cloud/common-grpc": "^0.9.0",
"@google-cloud/firestore": "^0.17.0",
"@google-cloud/functions-emulator": "^1.0.0-beta.5",
"actions-on-google": "^2.4.1",
"firebase-admin": "^6.0.0",
"firebase-functions": "^2.0.5"

1 Answer:

Answer 0 (score: 2)

The problem is that you are using JSON from the AoG simulator, but that is the JSON AoG sends to Dialogflow. Dialogflow processes it and sends different JSON to your webhook, which includes the results of processing the AoG JSON and determining the intent, parameters, and other information.
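For reference, a Dialogflow v2 webhook request has roughly the shape below. This is a sketch with hypothetical placeholder values (the `<...>` identifiers are not real ones from this project): the key point is that the intent has already been matched and appears in `queryResult.intent.displayName`, which is what the actions-on-google library dispatches on, while the AoG-style JSON from the question only appears nested inside `originalDetectIntentRequest.payload`:

```json
{
  "responseId": "<response-id>",
  "session": "projects/<project-id>/agent/sessions/<session-id>",
  "queryResult": {
    "queryText": "hello",
    "languageCode": "en-us",
    "parameters": {},
    "allRequiredParamsPresent": true,
    "intent": {
      "name": "projects/<project-id>/agent/intents/<intent-id>",
      "displayName": "Default Welcome Intent"
    },
    "intentDetectionConfidence": 1
  },
  "originalDetectIntentRequest": {
    "source": "google",
    "version": "2",
    "payload": {}
  }
}
```

Passing a body like the question's testData.json, which has no `queryResult`, gives the library no matched intent to dispatch on, which is consistent with always landing in the fallback handler.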

What you are doing should work, if you have the Dialogflow JSON. There are a few ways to get it:

  • The most straightforward is to run your webhook somewhere it can receive the POST from Dialogflow and look at the conv.request object, which should give you the JSON you need.

  • If you are running the webhook on your local development machine (as you suggest), I tend to start an ngrok tunnel. The tunnel provides a public HTTPS endpoint, which is useful in itself, and has the side effect of giving me a console where I can see exactly what the request and response JSON contain.

  • Finally, you should be able to go into your project settings in Dialogflow and turn on Cloud Logging. That log includes the requests sent to your webhook and the responses it returned.
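To make the first option concrete, one way to capture the JSON Dialogflow sends is to wrap the exported handler so it logs the raw request body before delegating to the actions-on-google app. A minimal sketch of that wrapper pattern; `withRequestLogging` is a hypothetical helper, not part of the actions-on-google API:

```javascript
// Hypothetical helper: wraps any (req, res) handler so the raw request
// body is logged before the real handler runs. The logged JSON is
// exactly what Dialogflow POSTed to the webhook, and can be saved as
// testData.json for replaying through the local functions shell.
function withRequestLogging(handler, log = console.log) {
  return (req, res) => {
    log('Dialogflow request body: ' + JSON.stringify(req.body));
    return handler(req, res);
  };
}

// In the question's webhook this would be used as:
//   exports.fulfillment = functions.region('europe-west1')
//     .https.onRequest(withRequestLogging(app));
```

Once the logged body is saved to a file, the same `fulfillment({method: 'POST', json: true, body: require(...)})` shell invocation from the question should reach the Default Welcome Intent handler instead of the fallback.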