My programming skills are very limited, so I apologize in advance.
I am trying to get intents from DialogFlow by streaming audio, and I am testing it with a microphone.
I referenced the following Google sample code:
Microphone Streaming Audio for Google STT
Intent Detection for Google DialogFlow
Both work fine on their own, but when I try to combine the two samples, I get the following error:
No handlers could be found for logger "grpc._channel"
Traceback (most recent call last):
  File "detect_intent_stream.py", line 181, in <module>
    detect_intent_stream(project_id, session_id, language_code)
  File "detect_intent_stream.py", line 162, in detect_intent_stream
    for response in responses:
  File "C:\Python27\lib\site-packages\google\api_core\grpc_helpers.py", line 83, in next
    six.raise_from(exceptions.from_grpc_error(exc), exc)
  File "C:\Python27\lib\site-packages\six.py", line 737, in raise_from
    raise value
google.api_core.exceptions.Unknown: None Exception iterating requests!
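As an aside, the first line ("No handlers could be found for logger \"grpc._channel\"") only means that Python logging is not configured; gRPC logs the exception raised inside the request iterator through that logger, so enabling basic logging should reveal the real cause behind "Exception iterating requests!". A minimal snippet for that (my assumption, not part of the samples):

    import logging
    # Configure a root handler so records from the grpc._channel logger
    # (including the traceback from the failing request generator) are printed.
    logging.basicConfig(level=logging.DEBUG)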
While searching for a solution I came across this post, but I am not sure how to apply the advice it gives:
Intermediate results on using session_client.streaming_detect_intent()
Here is my current code:
def detect_intent_stream(project_id, session_id, language_code):
    import dialogflow_v2 as dialogflow

    session_client = dialogflow.SessionsClient()
    audio_encoding = dialogflow.enums.AudioEncoding.AUDIO_ENCODING_LINEAR_16
    sample_rate_hertz = 8000
    session_path = session_client.session_path(project_id, session_id)

    def request_generator(audio_config):
        query_input = dialogflow.types.QueryInput(audio_config=audio_config)
        yield dialogflow.types.StreamingDetectIntentRequest(
            session=session_path,
            query_input=query_input,
            single_utterance=True)

        with MicrophoneStream(RATE, CHUNK) as stream:
            # while True:
            # Temp condition
            while dialogflow.types.StreamingRecognitionResult().is_final == False:
                audio_generated = stream.generator()
                # Temp condition
                if not audio_generated:
                    break
                yield dialogflow.types.StreamingDetectIntentRequest(
                    input_audio=audio_generated)

    audio_config = dialogflow.types.InputAudioConfig(
        audio_encoding=audio_encoding,
        language_code=language_code,
        sample_rate_hertz=sample_rate_hertz)

    requests = request_generator(audio_config)
    responses = session_client.streaming_detect_intent(requests)

    print('=' * 20)
    for response in responses:
        print('Intermediate transcript: "{}".'.format(
            response.recognition_result.transcript)).encode('utf-8')

    query_result = response.query_result

    print('=' * 20)
    print('Query text: {}'.format(query_result.query_text))
    print('Detected intent: {} (confidence: {})\n'.format(
        query_result.intent.display_name,
        query_result.intent_detection_confidence))
    print('Fulfillment text: {}\n'.format(
        query_result.fulfillment_text))
Edit: I have corrected the referenced code.
Answer 0 (score: 0)
Solved it!
The problem is that the STT request type, StreamingRecognizeRequest, and the DialogFlow request type, StreamingDetectIntentRequest, take their audio as different kinds of arguments.
The STT sample builds its requests from a generator, while StreamingDetectIntentRequest expects the actual audio buffer (raw bytes) in input_audio, not the generator itself.
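In other words, the generator has to be iterated and each chunk of bytes yielded as its own StreamingDetectIntentRequest. A minimal sketch of that structure, assuming the MicrophoneStream helper, RATE, and CHUNK from the STT microphone sample, and session_path, audio_config, and session_client set up as in the code above:

    def request_generator(session_path, audio_config, audio_chunks):
        query_input = dialogflow.types.QueryInput(audio_config=audio_config)
        # The first request carries only the session and the audio config.
        yield dialogflow.types.StreamingDetectIntentRequest(
            session=session_path,
            query_input=query_input,
            single_utterance=True)
        # Every following request carries one chunk of raw audio bytes.
        for chunk in audio_chunks:
            yield dialogflow.types.StreamingDetectIntentRequest(input_audio=chunk)

    with MicrophoneStream(RATE, CHUNK) as stream:
        requests = request_generator(session_path, audio_config, stream.generator())
        responses = session_client.streaming_detect_intent(requests)
        for response in responses:
            print('Intermediate transcript: "{}".'.format(
                response.recognition_result.transcript))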