I'm trying to implement speech recognition in an iOS Swift app. When the user taps the microphone button, I play a system sound and then run speech recognition with SpeechKit. If I comment out the SpeechKit code, the sound plays fine. However, when I put them together, I get no sound, and I also hear no sound after the speech recognition finishes.
Here is the code:
@IBAction func listenButtonTapped(sender: UIBarButtonItem) {
    let systemSoundID: SystemSoundID = 1113
    AudioServicesPlaySystemSound(systemSoundID)

    let session = SKSession(URL: NSURL(string: "nmsps://{my Nuance key}@sslsandbox.nmdp.nuancemobility.net:443"), appToken: "{my Nuance token}")
    session.recognizeWithType(SKTransactionSpeechTypeDictation,
                              detection: .Long,
                              language: "eng-USA",
                              delegate: self)
}
func transaction(transaction: SKTransaction!, didReceiveRecognition recognition: SKRecognition!) {
    let speechString = recognition.text
    print(speechString!)

    let systemSoundID: SystemSoundID = 1114
    AudioServicesPlaySystemSound(systemSoundID)
}
Either way, the speech recognition always works. If I comment out the SpeechKit code, the system sounds work fine.
For example, the following plays the sound every time the button is tapped:
@IBAction func listenButtonTapped(sender: UIBarButtonItem) {
    let systemSoundID: SystemSoundID = 1113
    AudioServicesPlaySystemSound(systemSoundID)
}
I've tried different queues without success. I think I need to move the SpeechKit code into some kind of callback or closure, but I'm not sure how to structure it.
Answer 0 (score: 0)
A solution to this problem is described here: https://developer.apple.com/documentation/audiotoolbox/1405202-audioservicesplayalertsound
SpeechKit adds the record category to the shared AVAudioSession, so sounds no longer play. What you want to do is:
let systemSoundID: SystemSoundID = 1113

// Change from record mode to play mode
do {
    try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback)
    try AVAudioSession.sharedInstance().setActive(true)
} catch let error as NSError {
    print("Error \(error)")
}

AudioServicesPlaySystemSoundWithCompletion(systemSoundID) {
    // do the recognition
}
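Putting the pieces together, a minimal sketch of the full view controller might look like the following. This uses the same Swift 2-era SpeechKit and AudioToolbox APIs from the question; the Nuance URL, app token, and sound IDs are the placeholders from the original post, and the exact class layout is an assumption, not a tested implementation:

```swift
import UIKit
import AudioToolbox
import AVFoundation
// import SpeechKit  // Nuance SpeechKit framework, as used in the question

class ListenViewController: UIViewController, SKTransactionDelegate {
    let startSoundID: SystemSoundID = 1113
    let endSoundID: SystemSoundID = 1114

    // Switch the shared audio session back to playback so system sounds are audible.
    func activatePlayback() {
        do {
            try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback)
            try AVAudioSession.sharedInstance().setActive(true)
        } catch let error as NSError {
            print("Audio session error: \(error)")
        }
    }

    @IBAction func listenButtonTapped(sender: UIBarButtonItem) {
        activatePlayback()
        // Start recognition only after the start sound has finished playing,
        // so SpeechKit's record category doesn't cut it off.
        AudioServicesPlaySystemSoundWithCompletion(startSoundID) {
            let session = SKSession(URL: NSURL(string: "nmsps://{my Nuance key}@sslsandbox.nmdp.nuancemobility.net:443"),
                                    appToken: "{my Nuance token}")
            session.recognizeWithType(SKTransactionSpeechTypeDictation,
                                      detection: .Long,
                                      language: "eng-USA",
                                      delegate: self)
        }
    }

    func transaction(transaction: SKTransaction!, didReceiveRecognition recognition: SKRecognition!) {
        print(recognition.text)
        // SpeechKit left the session in record mode; restore playback before
        // playing the end-of-recognition sound.
        activatePlayback()
        AudioServicesPlaySystemSound(endSoundID)
    }
}
```

The key design point is ordering: the session category must be playback before each sound, and recognition must start in the sound's completion handler rather than immediately after `AudioServicesPlaySystemSound`, which returns without waiting for playback to finish.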