I have a navigation app that gives spoken directions using AVSpeechUtterance (e.g. "Turn left in 200 feet"). I have already set the volume to 1 (speechUtteranceInstance.volume = 1), but the speech is still very soft compared to music or podcasts coming from the iPhone, especially when the sound goes over Bluetooth or a wired connection (e.g. connected to a car via Bluetooth).
Is there any way to boost the volume? (I know this has been asked on SO before, but so far none of the solutions I found there work for me.)
Answer 0 (Score: 5)
After a lot of research and experimenting, I found a good workaround.
First off, I think this is an iOS bug. When the conditions below are all true, I found that the voice instruction itself gets ducked as well (or at least sounds ducked), so the voice instruction plays at the same volume as the DUCKED music (and is therefore too soft to hear well):
- audio is routed over Bluetooth (the situation described in the question), and
- the audioSessionCategory is configured with the .duckOthers option.
The workaround I found is to feed the SpeechUtterance into an AVAudioEngine. This can only be done on iOS 13 or above, since that release adds the .write method to AVSpeechSynthesizer.
In short, I use an AVAudioEngine, an AVAudioUnitEQ and an AVAudioPlayerNode, and set the globalGain property of the AVAudioUnitEQ to about 10 dB. There are a few quirks as well, but they can be worked around (see the comments in the code).
Here is the full code:
import UIKit
import AVFoundation
import MediaPlayer

class ViewController: UIViewController {

    // MARK: AVAudio properties
    var engine = AVAudioEngine()
    var player = AVAudioPlayerNode()
    var eqEffect = AVAudioUnitEQ()
    var converter = AVAudioConverter(from: AVAudioFormat(commonFormat: AVAudioCommonFormat.pcmFormatInt16, sampleRate: 22050, channels: 1, interleaved: false)!, to: AVAudioFormat(commonFormat: AVAudioCommonFormat.pcmFormatFloat32, sampleRate: 22050, channels: 1, interleaved: false)!)
    let synthesizer = AVSpeechSynthesizer()
    var bufferCounter: Int = 0

    let audioSession = AVAudioSession.sharedInstance()

    override func viewDidLoad() {
        super.viewDidLoad()

        let outputFormat = AVAudioFormat(commonFormat: AVAudioCommonFormat.pcmFormatFloat32, sampleRate: 22050, channels: 1, interleaved: false)!
        setupAudio(format: outputFormat, globalGain: 0)
    }

    func activateAudioSession() {
        do {
            try audioSession.setCategory(.playback, mode: .voicePrompt, options: [.mixWithOthers, .duckOthers])
            try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
        } catch {
            print("An error has occurred while setting the AVAudioSession.")
        }
    }

    @IBAction func tappedPlayButton(_ sender: Any) {
        eqEffect.globalGain = 0
        play()
    }

    @IBAction func tappedPlayLoudButton(_ sender: Any) {
        eqEffect.globalGain = 10
        play()
    }

    func play() {
        let path = Bundle.main.path(forResource: "voiceStart", ofType: "wav")!
        let file = try! AVAudioFile(forReading: URL(fileURLWithPath: path))
        self.player.scheduleFile(file, at: nil, completionHandler: nil)
        let utterance = AVSpeechUtterance(string: "This is to test if iOS is able to boost the voice output above the 100% limit.")
        synthesizer.write(utterance) { buffer in
            guard let pcmBuffer = buffer as? AVAudioPCMBuffer, pcmBuffer.frameLength > 0 else {
                print("could not create buffer or buffer empty")
                return
            }

            // QUIRK: Need to convert the buffer to a different format because AVAudioEngine does not support the format returned from AVSpeechSynthesizer.
            let convertedBuffer = AVAudioPCMBuffer(pcmFormat: AVAudioFormat(commonFormat: AVAudioCommonFormat.pcmFormatFloat32, sampleRate: pcmBuffer.format.sampleRate, channels: pcmBuffer.format.channelCount, interleaved: false)!, frameCapacity: pcmBuffer.frameCapacity)!
            do {
                try self.converter!.convert(to: convertedBuffer, from: pcmBuffer)
                self.bufferCounter += 1
                self.player.scheduleBuffer(convertedBuffer, completionCallbackType: .dataPlayedBack, completionHandler: { (type) -> Void in
                    DispatchQueue.main.async {
                        self.bufferCounter -= 1
                        print(self.bufferCounter)
                        if self.bufferCounter == 0 {
                            self.player.stop()
                            self.engine.stop()
                            try! self.audioSession.setActive(false, options: [])
                        }
                    }
                })
                self.converter!.reset()
                //self.player.prepare(withFrameCount: convertedBuffer.frameLength)
            }
            catch let error {
                print(error.localizedDescription)
            }
        }
        activateAudioSession()
        if !self.engine.isRunning {
            try! self.engine.start()
        }
        if !self.player.isPlaying {
            self.player.play()
        }
    }

    func setupAudio(format: AVAudioFormat, globalGain: Float) {
        // QUIRK: Connecting the equalizer to the engine somehow starts the shared audioSession, and if that audioSession is not configured with .mixWithOthers and is not deactivated afterwards, this will stop any background music that was already playing. So first configure the audio session, then set up the engine, and then deactivate the session again.
        try? self.audioSession.setCategory(.playback, options: .mixWithOthers)
        eqEffect.globalGain = globalGain
        engine.attach(player)
        engine.attach(eqEffect)
        engine.connect(player, to: eqEffect, format: format)
        engine.connect(eqEffect, to: engine.mainMixerNode, format: format)
        engine.prepare()
        try? self.audioSession.setActive(false)
    }
}
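A couple of usage notes: tappedPlayButton and tappedPlayLoudButton are wired to two buttons and play the same test utterance at 0 dB and +10 dB of globalGain respectively, so you can compare the two directly. Also, play() force-unwraps a bundled voiceStart.wav test file, so add such a file to the project (or drop the scheduleFile call) before trying this out.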
Answer 1 (Score: 1)
Give this a try:
import AVFoundation
try? AVAudioSession.sharedInstance().setCategory(.playback, mode: .default, options: [])
let utterance = AVSpeechUtterance(string: "Hello world")
utterance.voice = AVSpeechSynthesisVoice(language: "en-GB")
let synthesizer = AVSpeechSynthesizer()
synthesizer.speak(utterance)
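Note that .playback with an empty options array, as above, is non-mixable: it will stop any background audio (music, podcasts) while the utterance is spoken. If the other audio should keep playing, add .mixWithOthers or .duckOthers to the options.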
Answer 2 (Score: 0)
The documentation mentions that the default value of volume is 1.0, which is the loudest. The actual loudness is based on the user's volume setting; if the user has turned the volume down, the speech won't be loud enough.
If the user's volume is below a certain level, you could perhaps consider showing a visual warning. It seems this answer shows how to do that via AVAudioSession.
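As a minimal sketch of that idea (the 0.3 threshold is an arbitrary example value, not something from the linked answer), you could read and observe the session's outputVolume property:

import AVFoundation

let session = AVAudioSession.sharedInstance()
try? session.setActive(true) // outputVolume only reflects the device volume while the session is active

// One-shot check, e.g. right before speaking a direction.
if session.outputVolume < 0.3 {
    // Show a visual warning such as "Turn up your volume to hear directions."
}

// Or observe changes via KVO to keep the warning up to date.
// Keep a strong reference to `observation` for as long as you want updates.
let observation = session.observe(\.outputVolume, options: [.new]) { _, change in
    if let volume = change.newValue, volume < 0.3 {
        // Update the warning UI.
    }
}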
AVAudioSession is worth exploring, because some of its settings affect the speech output... for example, whether speech from your app interrupts audio from other apps.
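For example (a sketch using option names from Apple's API; which combination is right depends on the app), the category options control whether your prompt mixes with, ducks, or interrupts other audio:

import AVFoundation

let session = AVAudioSession.sharedInstance()

// Duck other apps' audio while the prompt plays, then let it come back up.
try? session.setCategory(.playback,
                         mode: .voicePrompt, // tuned for spoken navigation prompts (iOS 12+)
                         options: [.mixWithOthers, .duckOthers])

// Alternative: pause other *spoken* audio (podcasts, audiobooks) but mix with music.
// try? session.setCategory(.playback, options: [.interruptSpokenAudioAndMixWithOthers])

try? session.setActive(true, options: .notifyOthersOnDeactivation)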