AVSpeechSynthesizer does not speak after using SFSpeechRecognizer

Asked: 2016-10-26 19:33:08

Tags: ios iphone avspeechsynthesizer sfspeechrecognizer

I built a simple app that uses SFSpeechRecognizer for speech recognition and displays the transcribed speech as text in a UITextView on the screen. Now I am trying to make the phone speak the displayed text, but for some reason it doesn't work. The AVSpeechSynthesizer speak function only works before SFSpeechRecognizer has been used. For example, when the app launches it shows some welcome text in the UITextView, and if I tap the "speak" button the phone speaks the welcome text. Then, if I do a recording (for speech recognition), the recognized speech is shown in the UITextView. Now I want the phone to speak that text, but unfortunately it doesn't.

Here is the code:

import UIKit
import Speech
import AVFoundation


class ViewController: UIViewController, SFSpeechRecognizerDelegate, AVSpeechSynthesizerDelegate {

    @IBOutlet weak var textView: UITextView!
    @IBOutlet weak var microphoneButton: UIButton!

    private let speechRecognizer = SFSpeechRecognizer(locale: Locale.init(identifier: "en-US"))!

    private var recognitionRequest: SFSpeechAudioBufferRecognitionRequest?
    private var recognitionTask: SFSpeechRecognitionTask?
    private let audioEngine = AVAudioEngine()

    override func viewDidLoad() {
        super.viewDidLoad()

        microphoneButton.isEnabled = false

        speechRecognizer.delegate = self

        SFSpeechRecognizer.requestAuthorization { (authStatus) in

            var isButtonEnabled = false

            switch authStatus {
            case .authorized:
                isButtonEnabled = true

            case .denied:
                isButtonEnabled = false
                print("User denied access to speech recognition")

            case .restricted:
                isButtonEnabled = false
                print("Speech recognition restricted on this device")

            case .notDetermined:
                isButtonEnabled = false
                print("Speech recognition not yet authorized")
            }

            OperationQueue.main.addOperation() {
                self.microphoneButton.isEnabled = isButtonEnabled
            }
        }
    }

    @IBAction func speakTapped(_ sender: UIButton) {
        let string = self.textView.text
        let utterance = AVSpeechUtterance(string: string!)
        let synthesizer = AVSpeechSynthesizer()
        synthesizer.delegate = self
        synthesizer.speak(utterance)
    }
    @IBAction func microphoneTapped(_ sender: AnyObject) {
        if audioEngine.isRunning {
            audioEngine.stop()
            recognitionRequest?.endAudio()
            microphoneButton.isEnabled = false
            microphoneButton.setTitle("Start Recording", for: .normal)
        } else {
            startRecording()
            microphoneButton.setTitle("Stop Recording", for: .normal)
        }
    }

    func startRecording() {

        // Cancel any previous recognition task before starting a new one
        if recognitionTask != nil {
            recognitionTask?.cancel()
            recognitionTask = nil
        }

        // Configure the shared audio session for recording
        let audioSession = AVAudioSession.sharedInstance()
        do {
            try audioSession.setCategory(AVAudioSessionCategoryRecord)
            try audioSession.setMode(AVAudioSessionModeMeasurement)
            try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
        } catch {
            print("audioSession properties weren't set because of an error.")
        }

        // Create a recognition request that will receive live audio buffers
        recognitionRequest = SFSpeechAudioBufferRecognitionRequest()

        guard let inputNode = audioEngine.inputNode else {
            fatalError("Audio engine has no input node")
        }

        guard let recognitionRequest = recognitionRequest else {
            fatalError("Unable to create an SFSpeechAudioBufferRecognitionRequest object")
        }

        // Report partial results so the text view updates while the user is speaking
        recognitionRequest.shouldReportPartialResults = true

        // Start the recognition task and display each transcription as it arrives
        recognitionTask = speechRecognizer.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in

            var isFinal = false

            if result != nil {
                self.textView.text = result?.bestTranscription.formattedString
                isFinal = (result?.isFinal)!
            }

            // On error or final result, stop the engine and remove the microphone tap
            if error != nil || isFinal {
                self.audioEngine.stop()
                inputNode.removeTap(onBus: 0)

                self.recognitionRequest = nil
                self.recognitionTask = nil

                self.microphoneButton.isEnabled = true
            }
        })

        // Tap the microphone input and feed its buffers to the recognition request
        let recordingFormat = inputNode.outputFormat(forBus: 0)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
            self.recognitionRequest?.append(buffer)
        }

        // Prepare and start the audio engine
        audioEngine.prepare()

        do {
            try audioEngine.start()
        } catch {
            print("audioEngine couldn't start because of an error.")
        }

        textView.text = "Say something, I'm listening!"

    }

    func speechRecognizer(_ speechRecognizer: SFSpeechRecognizer, availabilityDidChange available: Bool) {
        if available {
            microphoneButton.isEnabled = true
        } else {
            microphoneButton.isEnabled = false
        }
    }
}

5 Answers:

Answer 0 (score: 13):

Use the following code to fix the problem:

let audioSession = AVAudioSession.sharedInstance()
do {
    try audioSession.setCategory(AVAudioSessionCategoryPlayback)
    try audioSession.setMode(AVAudioSessionModeDefault)
} catch {
    print("audioSession properties weren't set because of an error.")
}

The code above should be used in the following way:

@IBAction func microphoneTapped(_ sender: AnyObject) {

    if audioEngine.isRunning {
        audioEngine.stop()
        recognitionRequest?.endAudio()

        // Switch the session back to playback so the synthesizer can be heard
        let audioSession = AVAudioSession.sharedInstance()
        do {
            try audioSession.setCategory(AVAudioSessionCategoryPlayback)
            try audioSession.setMode(AVAudioSessionModeDefault)
        } catch {
            print("audioSession properties weren't set because of an error.")
        }

        microphoneButton.isEnabled = false
        microphoneButton.setTitle("Start Recording", for: .normal)
    } else {
        startRecording()
        microphoneButton.setTitle("Stop Recording", for: .normal)
    }
}

Here, after stopping the audioEngine, we set the audioSession category to AVAudioSessionCategoryPlayback and the audioSession mode to AVAudioSessionModeDefault. Then, when you call the next text-to-speech method, it will work fine.
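
Along the same lines, the session reset can also live directly in the speak action instead of microphoneTapped. Here is a minimal sketch based on the question's speakTapped; the retained `synthesizer` property is an assumption (a synthesizer created locally inside the action can be deallocated before it finishes speaking):

// A minimal sketch; `synthesizer` is assumed to be a retained property, e.g.
// private let synthesizer = AVSpeechSynthesizer()
@IBAction func speakTapped(_ sender: UIButton) {
    // Switch the session back to playback before speaking
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(AVAudioSessionCategoryPlayback)
        try audioSession.setMode(AVAudioSessionModeDefault)
    } catch {
        print("audioSession properties weren't set because of an error.")
    }

    let utterance = AVSpeechUtterance(string: textView.text ?? "")
    synthesizer.speak(utterance)
}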

Answer 1 (score: 11):

You should change this line in the startRecording method:

try audioSession.setCategory(AVAudioSessionCategoryRecord)            

to:

try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord)
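
With that change, the session setup in startRecording would look roughly like the sketch below. The `.defaultToSpeaker` option is an extra assumption, not part of the original answer; it routes playback to the loudspeaker instead of the quiet earpiece receiver:

let audioSession = AVAudioSession.sharedInstance()
do {
    // PlayAndRecord keeps the microphone available while still allowing playback
    try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord,
                                 with: .defaultToSpeaker)
    try audioSession.setMode(AVAudioSessionModeMeasurement)
    try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
} catch {
    print("audioSession properties weren't set because of an error.")
}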

Answer 2 (score: 8):

The problem is that when you start speech recognition, you set the audio session category to Record. You cannot play any audio (including speech synthesis) while the audio session category is Record.
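
An easy way to verify this is to check the session category right before calling speak and switch it if it is still set to Record; a minimal sketch:

let session = AVAudioSession.sharedInstance()
if session.category == AVAudioSessionCategoryRecord {
    // Playback is silenced under the Record category; switch before speaking
    try? session.setCategory(AVAudioSessionCategoryPlayback)
}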

Answer 3 (score: 1):

Try this:

try audioSession.setCategory(AVAudioSessionCategoryRecord)

Answer 4 (score: 1):

When using STT (speech-to-text), you have to set the audio session like this:

AVAudioSession *avAudioSession = [AVAudioSession sharedInstance];

if (avAudioSession) {
    [avAudioSession setCategory:AVAudioSessionCategoryRecord error:nil];
    [avAudioSession setMode:AVAudioSessionModeMeasurement error:nil];
    [avAudioSession setActive:true withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:nil];
}

When using TTS (text-to-speech) again, set the AudioSession like this:

[regRequest endAudio];

AVAudioSession *avAudioSession = [AVAudioSession sharedInstance];
if (avAudioSession) {
    [avAudioSession setCategory:AVAudioSessionCategoryPlayback error:nil];
    [avAudioSession setMode:AVAudioSessionModeDefault error:nil];
}

It works perfectly for me. The low-audio (quiet playback) problem is also solved.
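
For the Swift code in the question, the same two-step pattern would look roughly like this sketch (the first half before starting recognition, the second half after calling recognitionRequest?.endAudio() and before speaking):

// Before speech-to-text: record-oriented session (matches startRecording)
let session = AVAudioSession.sharedInstance()
try? session.setCategory(AVAudioSessionCategoryRecord)
try? session.setMode(AVAudioSessionModeMeasurement)
try? session.setActive(true, with: .notifyOthersOnDeactivation)

// Before text-to-speech: switch back to playback
try? session.setCategory(AVAudioSessionCategoryPlayback)
try? session.setMode(AVAudioSessionModeDefault)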