Clearing SFSpeechAudioBufferRecognitionRequest input after each result (Swift 3)

Asked: 2017-08-14 19:38:09

Tags: ios swift swift3 siri sirikit

I integrated speech-to-text by following this appcoda tutorial. The problem I'm facing is that I want users to be able to type or edit the text themselves, but SFSpeechAudioBufferRecognitionRequest doesn't take the user's typed input into account.

Is there a way to feed the user's typed input into SFSpeechAudioBufferRecognitionRequest, or some way to clear the SFSpeechAudioBufferRecognitionRequest's accumulated input before starting a new request?
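For context: SFSpeechAudioBufferRecognitionRequest has no documented way to seed it with previously typed text, so a common workaround is to keep the user-edited text in a separate property and append only the live transcription to it in the UI layer. A minimal sketch of that idea (the class and property names here are hypothetical, not part of any framework):

```swift
import Speech

class TranscriptComposer {
    /// Text the user has typed or edited by hand; frozen when recording starts.
    private var committedText = ""
    /// The live partial transcription from the current recognition session.
    private var liveTranscription = ""

    /// Call when a new recording starts: everything currently in the text view
    /// becomes the committed prefix, and the live part starts empty.
    func beginSession(currentEditorText: String) {
        committedText = currentEditorText
        liveTranscription = ""
    }

    /// Call from the recognition task's result handler with each partial result.
    func update(with result: SFSpeechRecognitionResult) {
        liveTranscription = result.bestTranscription.formattedString
    }

    /// The string to display: user-edited prefix plus the current transcription.
    var displayText: String {
        committedText.isEmpty ? liveTranscription
                              : committedText + " " + liveTranscription
    }
}
```

Because each recognition session gets a brand-new request object (as in the answer below), the request itself never needs to be "cleared"; only the display text has to be stitched together.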

Thanks in advance.

1 Answer:

Answer 0 (score: 2)

Here is what I use to create a recognition request:

func recordSpeech() throws {
    // Cancel the previous task if it's running.
    if let recognitionTask = recognitionTask {
        recognitionTask.cancel()
        self.recognitionTask = nil
    }

    isRecognizing = true
    self.delegate?.recognitionStarted(sender: self)

    let audioSession = AVAudioSession.sharedInstance()
    try audioSession.setCategory(AVAudioSessionCategoryRecord)
    try audioSession.setMode(AVAudioSessionModeMeasurement)
    try audioSession.setActive(true, with: .notifyOthersOnDeactivation)

    recognitionRequest = SFSpeechAudioBufferRecognitionRequest()

    guard let inputNode = audioEngine.inputNode else {
        fatalError("Audio engine has no input node")
    }

    guard let recognitionRequest = recognitionRequest else {
        fatalError("Unable to create a SFSpeechAudioBufferRecognitionRequest object")
    }

    // Configure request so that results are returned before audio recording is finished
    recognitionRequest.shouldReportPartialResults = true

    // A recognition task represents a speech recognition session.
    // We keep a reference to the task so that it can be cancelled.
    recognitionTask = recognizer.recognitionTask(with: recognitionRequest) { result, error in

        func finalizeResult() {
            self.audioEngine.stop()
            inputNode.removeTap(onBus: 0)
            self.recognitionRequest = nil
            self.recognitionTask = nil
        }

        guard error == nil else {
            finalizeResult()
            return
        }

        guard let result = result else { return }

        if result.isFinal {
            finalizeResult()
        } else {
            guard self.isRecognizing else {
                return
            }

            // process partial result
            self.processRecognition(result: result)
        }
    }

    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
        self.recognitionRequest?.append(buffer)
    }

    audioEngine.prepare()

    do {
        try audioEngine.start()
    } catch let error as NSError {
        print("audio engine start error=\(error)")
    }
}

To cancel or stop the recognition at any point, use one of the following methods:

@objc func stopRecording() {
    isRecognizing = false
    audioEngine.stop()
    recognitionRequest?.endAudio()
    self.delegate?.recognitionFinished()
}

func cancelRecording() {
    isRecognizing = false
    audioEngine.stop()
    recognitionTask?.cancel()
    self.delegate?.recognitionFinished()
}

I would set up one button to trigger speech recognition and bind it to recordSpeech(), and a second button bound to stopRecording(). When the user stops the request, result?.isFinal will be true, so you know you have the final text for the first utterance. The user can then use speech input again for a second utterance.
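The button wiring described above might look like the following sketch. It assumes the recordSpeech(), stopRecording(), and cancelRecording() methods from the answer live on the same view controller; the outlet and class names are illustrative:

```swift
import UIKit

class SpeechViewController: UIViewController {
    @IBOutlet weak var recordButton: UIButton!
    @IBOutlet weak var stopButton: UIButton!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Start a new recognition session on tap; stop (and finalize) on the other button.
        recordButton.addTarget(self, action: #selector(recordTapped), for: .touchUpInside)
        stopButton.addTarget(self, action: #selector(stopRecording), for: .touchUpInside)
    }

    @objc func recordTapped() {
        do {
            // recordSpeech() is the throwing method defined in the answer above.
            try recordSpeech()
        } catch {
            print("could not start recording: \(error)")
        }
    }
}
```

Note that stopRecording() calls recognitionRequest?.endAudio(), which lets the recognizer deliver a final result (isFinal == true), whereas cancelRecording() abandons the session without one.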

Most of my code comes from the WWDC 2016 session on speech recognition, which you can find here:

Transcript

Video