iOS: AVSpeechSynthesizer stops working after using SFSpeechRecognizer

Date: 2017-04-26 14:54:50

Tags: ios speech-recognition avspeechsynthesizer sfspeechrecognizer

I am building a text-to-speech and speech-to-text app.

The problem I am running into is that text-to-speech with AVSpeechSynthesizer works fine at first. But after I record with SFSpeechRecognizer to do speech-to-text, the text-to-speech stops working (that is, nothing is spoken back).

I am new to iOS development. I took this code from a few different tutorials and tried to merge them together.

Here is my code:

    private var speechRecognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))!
    private var recognitionRequest: SFSpeechAudioBufferRecognitionRequest?
    private var recognitionTask: SFSpeechRecognitionTask?
    private var audioEngine = AVAudioEngine()

    @objc(speak:location:date:callback:)
    func speak(name: String, location: String, date: NSNumber,_ callback: @escaping (NSObject) -> ()) -> Void {
      let utterance = AVSpeechUtterance(string: name)
      let synthesizer = AVSpeechSynthesizer()
      synthesizer.speak(utterance)
    }


    @available(iOS 10.0, *)
    @objc(startListening:location:date:callback:)
    func startListening(name: String, location: String, date: NSNumber,_ callback: @escaping (NSObject) -> ()) -> Void {
        if audioEngine.isRunning {
            audioEngine.stop()
            recognitionRequest?.endAudio()


        } else {

            if recognitionTask != nil {  //1
                recognitionTask?.cancel()
                recognitionTask = nil
            }

            let audioSession = AVAudioSession.sharedInstance()  //2
            do {
                try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord)
                try audioSession.setMode(AVAudioSessionModeMeasurement)
                try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
            } catch {
                print("audioSession properties weren't set because of an error.")
            }

            recognitionRequest = SFSpeechAudioBufferRecognitionRequest()  //3

            guard let inputNode = audioEngine.inputNode else {
                fatalError("Audio engine has no input node")
            }  //4

            guard let recognitionRequest = recognitionRequest else {
                fatalError("Unable to create an SFSpeechAudioBufferRecognitionRequest object")
            } //5

            recognitionRequest.shouldReportPartialResults = true  //6

            recognitionTask = speechRecognizer.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in  //7

                var isFinal = false  //8

                if result != nil {

                    print(result?.bestTranscription.formattedString)  //9
                    isFinal = (result?.isFinal)!
                }

                if error != nil || isFinal {  //10
                    self.audioEngine.stop()
                    inputNode.removeTap(onBus: 0)

                    self.recognitionRequest = nil
                    self.recognitionTask = nil


                }
            })

            let recordingFormat = inputNode.outputFormat(forBus: 0)  //11
            inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
                self.recognitionRequest?.append(buffer)
            }

            audioEngine.prepare()  //12

            do {
                try audioEngine.start()
            } catch {
                print("audioEngine couldn't start because of an error.")
            }




        }

    }
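(Editorial note, not from the original question:) one detail in the `speak` method above is worth flagging independently of the audio-session issue: the `AVSpeechSynthesizer` is created as a method-local constant, so it can be deallocated as soon as the method returns, which silently cuts speech off. A minimal sketch of keeping a strong reference instead (the `Speaker` class name is illustrative):

```swift
import AVFoundation

class Speaker {
    // Keep the synthesizer alive for as long as speech may be playing;
    // a method-local instance can be deallocated before it finishes speaking.
    private let synthesizer = AVSpeechSynthesizer()

    func speak(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)
        synthesizer.speak(utterance)
    }
}
```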

1 Answer:

Answer 0 (score: 2)

They both use the AVAudioSession.

For AVSpeechSynthesizer, I guess it has to be set to:

_audioSession.SetCategory(AVAudioSessionCategory.Playback, 
AVAudioSessionCategoryOptions.MixWithOthers);

and for SFSpeechRecognizer:

_audioSession.SetCategory(AVAudioSessionCategory.PlayAndRecord, 
AVAudioSessionCategoryOptions.MixWithOthers);

Hope it helps.
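The answer's snippets are in Xamarin/C# syntax; the same idea sketched in Swift to match the question's code (using the iOS 10-era string category constants, as in the question — treat this as an illustrative sketch, not a verified fix): reconfigure the shared session before each mode, instead of leaving it in the measurement mode that `startListening` sets.

```swift
import AVFoundation

let session = AVAudioSession.sharedInstance()

// Before text-to-speech: playback category, mixing with other audio.
func configureForSpeaking() throws {
    try session.setCategory(AVAudioSessionCategoryPlayback, with: .mixWithOthers)
    try session.setActive(true)
}

// Before speech recognition: play-and-record, again mixing with others.
func configureForListening() throws {
    try session.setCategory(AVAudioSessionCategoryPlayAndRecord, with: .mixWithOthers)
    try session.setActive(true)
}
```

Calling `configureForSpeaking()` at the top of `speak` (and dropping the `setMode(AVAudioSessionModeMeasurement)` call, which is tuned for recording) is one way to apply the answer's suggestion.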