Swift: While converting speech to text and speaking the result back, the iPhone's voice volume is low

Time: 2017-07-28 11:08:06

Tags: ios swift avaudioplayer avspeechsynthesizer sfspeechrecognizer

I am trying out a speech recognition sample. Once my speech is recognized through the microphone, I try to have the iPhone speak the recognized text aloud. This works, but the spoken voice is too low. Can you guide me?

By contrast, if I run the same AVSpeechUtterance code from a simple button action, the volume is normal.

But after I call the startRecognise() method, the volume becomes too low.

My code:

func startRecognise()
{
    let audioSession = AVAudioSession.sharedInstance()
    do
    {
        try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord)
        try audioSession.setMode(AVAudioSessionModeDefault)
        try audioSession.setMode(AVAudioSessionModeMeasurement)
        try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
        try AVAudioSession.sharedInstance().overrideOutputAudioPort(AVAudioSessionPortOverride.speaker)
    }
    catch
    {
        print("audioSession properties weren't set because of an error.")
    }
    recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
    guard let inputNode = audioEngine.inputNode else {
        fatalError("Audio engine has no input node")
    }
    guard let recognitionRequest = recognitionRequest else {
        fatalError("Unable to create an SFSpeechAudioBufferRecognitionRequest object")
    }
    recognitionRequest.shouldReportPartialResults = true
    recognitionTask = speechRecognizer.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in
        if result != nil
        {
            let lastword = result?.bestTranscription.formattedString.components(separatedBy: " ").last
            if lastword == "repeat" || lastword == "Repeat"{
                self.myUtterance2 = AVSpeechUtterance(string: "You have spoken repeat")
                self.myUtterance2.rate = 0.4
                self.myUtterance2.volume = 1.0
                self.myUtterance2.pitchMultiplier = 1.0
                self.synth1.speak(self.myUtterance2)
                // HERE VOICE IS TOO LOW. 
            }
        }
    })
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
        self.recognitionRequest?.append(buffer)
    }
    audioEngine.prepare()
    do 
    {
        try audioEngine.start()
    } 
    catch 
    {
        print("audioEngine couldn't start because of an error.")
    }
}

My button action:

func buttonAction()
{
   self.myUtterance2 = AVSpeechUtterance(string: "You are in button action")
   self.myUtterance2.rate = 0.4
   self.myUtterance2.volume = 1.0
   self.myUtterance2.pitchMultiplier = 1.0
   self.synth1.speak(self.myUtterance2)
   // Before going for startRecognise() method, 
   //I tried with buttonAction(), 
   //this time volume is normal. 
   //After startRecognise() method call, volume is too low in both methods.
}

2 Answers:

Answer 0 (Score: 14)

Finally, I found the solution.

func startRecognise()
{
    let audioSession = AVAudioSession.sharedInstance()
    do
    {
        try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord)
        try audioSession.setMode(AVAudioSessionModeDefault)
        //try audioSession.setMode(AVAudioSessionModeMeasurement)
        try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
        try AVAudioSession.sharedInstance().overrideOutputAudioPort(AVAudioSessionPortOverride.speaker)
    }
    catch
    {
        print("audioSession properties weren't set because of an error.")
    }

    ... 
}

After I commented out the line try audioSession.setMode(AVAudioSessionModeMeasurement), the volume was normal.

Answer 1 (Score: 0)

Digging deeper into the technical details: overrideOutputAudioPort() temporarily changes the current audio route.

func overrideOutputAudioPort(_ portOverride: AVAudioSession.PortOverride) throws

If your app uses the playAndRecord category, calling this method with the AVAudioSession.PortOverride.speaker option causes audio to be routed to the built-in speaker and microphone, regardless of other settings.

This change remains in effect only until the current route changes, or until you call this method again with the AVAudioSession.PortOverride.none option.

try audioSession.setMode(AVAudioSessionModeDefault)
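
As a small sketch (this call is not shown in the original answer), the override can be reverted once the speaker route is no longer needed:

let audioSession = AVAudioSession.sharedInstance()
do {
    // .none removes the temporary speaker override and restores the session's normal routing.
    try audioSession.overrideOutputAudioPort(.none)
} catch {
    print("Could not reset the audio route.")
}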

If you would prefer to permanently enable this behavior, you should instead set the category's defaultToSpeaker option. Setting this option always routes audio to the speaker rather than the receiver when no other accessory, such as headphones, is in use.

In Swift 5.x, the code above would look something like this:

let audioSession = AVAudioSession.sharedInstance()
do {
  try audioSession.setCategory(.playAndRecord)
  try audioSession.setMode(.default)
  try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
  try audioSession.overrideOutputAudioPort(.speaker)
} catch {
  debugPrint("Enable to start audio engine")
  return
}
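
If you prefer the permanent defaultToSpeaker behaviour described above instead of the temporary override, a minimal sketch (assuming iOS 10+ and the same shared session, not taken from the original answer) could look like this:

import AVFoundation

let audioSession = AVAudioSession.sharedInstance()
do {
  // defaultToSpeaker routes playAndRecord output to the built-in speaker
  // whenever no other accessory (e.g. headphones) is connected.
  try audioSession.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker])
  try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
} catch {
  debugPrint("Unable to configure the audio session")
}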

Setting the mode to measurement minimizes the amount of system-supplied signal processing applied to the input and output signals.

try audioSession.setMode(.measurement)

By commenting out this mode and using the default mode, audio routing through the built-in speaker and microphone is permanently enabled.

Thanks @McDonal_11 for the answer. Hope this helps with understanding the technical details.