I am following a tutorial on the iOS 10 Speech Recognition API (https://code.tutsplus.com/tutorials/using-the-speech-recognition-api-in-ios-10--cms-28032?ec_unit=translation-info-language), but my version does not work: speech input produces no text response. I followed the tutorial closely, although I had to make a few changes (apparently newer versions of Swift no longer accept some of the code lines from the tutorial as written). Can you give me some ideas about how and why it isn't working?
Here is the method I am running:
```swift
func startRecording() {
    // Setup audio engine and speech recognizer
    let node = audioEngine.inputNode
    let recordingFormat = node.outputFormat(forBus: 0)
    node.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { buffer, _ in
        self.request.append(buffer)
    }

    // Prepare and start recording
    audioEngine.prepare()
    do {
        try audioEngine.start()
        self.status = .recognizing
    } catch {
        return print(error)
    }

    // Analyze the speech
    recognitionTask = speechRecognizer?.recognitionTask(with: request, resultHandler: { result, error in
        if let result = result {
            self.tview.text = result.bestTranscription.formattedString
            NSLog(result.bestTranscription.formattedString)
        } else if let error = error {
            print(error)
            NSLog(error.localizedDescription)
        }
    })
}
```
When debugging, neither speechRecognizer nor recognitionTask is nil.
Here is how the variables are defined on the ViewController:
```swift
let audioEngine = AVAudioEngine()
let speechRecognizer: SFSpeechRecognizer? = SFSpeechRecognizer()
let request = SFSpeechAudioBufferRecognitionRequest()
var recognitionTask: SFSpeechRecognitionTask?
```
Working setup: tested on a 2017 iPad running iOS 11.4, with Xcode 9.4.1 and Swift 4.1.
Thanks!
Answer 0 (score: 1)
This issue is caused by the AVAudioSession not being set to Record. Try this.
Add this to your View Controller:
```swift
let audioSession = AVAudioSession.sharedInstance()
```
Your final method will be:
```swift
func startRecording() {
    // Change / Edit Start
    do {
        try audioSession.setCategory(AVAudioSessionCategoryRecord)
        try audioSession.setMode(AVAudioSessionModeMeasurement)
        try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
    } catch {
        print("audioSession properties weren't set because of an error.")
    }
    // Change / Edit Finished

    // Setup audio engine and speech recognizer
    let node = audioEngine.inputNode
    let recordingFormat = node.outputFormat(forBus: 0)
    node.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { buffer, _ in
        self.request.append(buffer)
    }

    // Prepare and start recording
    audioEngine.prepare()
    do {
        try audioEngine.start()
        self.status = .recognizing
    } catch {
        return print(error)
    }

    // Analyze the speech
    recognitionTask = speechRecognizer?.recognitionTask(with: request, resultHandler: { result, error in
        if let result = result {
            self.tview.text = result.bestTranscription.formattedString
            NSLog(result.bestTranscription.formattedString)
        } else if let error = error {
            print(error)
            NSLog(error.localizedDescription)
        }
    })
}
```
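Separately from the audio session (this part is not from the tutorial, just a common companion check): the Speech framework also delivers no transcriptions if the app has not been granted speech-recognition authorization, which requires the `NSSpeechRecognitionUsageDescription` and `NSMicrophoneUsageDescription` keys in Info.plist. A minimal sketch of gating the recording behind that check, assuming a `startRecording()` method like the one above exists:

```swift
import Speech

// Hypothetical helper: request authorization before the first recording.
// If the status is not .authorized, recognitionTask never delivers results.
func requestSpeechAuthorization(then start: @escaping () -> Void) {
    SFSpeechRecognizer.requestAuthorization { status in
        // The handler may arrive on a background queue; hop to main for UI work.
        OperationQueue.main.addOperation {
            switch status {
            case .authorized:
                start()
            case .denied, .restricted, .notDetermined:
                NSLog("Speech recognition not authorized: \(status.rawValue)")
            }
        }
    }
}
```

Calling `requestSpeechAuthorization(then: startRecording)` once (for example in `viewDidLoad`) ensures the first tap on the microphone actually has permission to transcribe.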
Answer 1 (score: 0)
Add the following to your existing code:
```swift
recognitionTask = speechRecognizer?.recognitionTask(with: request, resultHandler: { result, error in
    if let result = result, result.isFinal {
        self.tview.text = result.bestTranscription.formattedString
        NSLog(result.bestTranscription.formattedString)
    } else if let error = error {
        print(error)
        NSLog(error.localizedDescription)
    }
})
```
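One caveat with `isFinal`: the recognizer only marks a result final after it is told the audio stream has ended. A sketch of a matching stop method, assuming the same property names as the question, that ends the audio so the final result can arrive:

```swift
func stopRecording() {
    // Stop capturing audio and remove the tap installed in startRecording()
    audioEngine.stop()
    audioEngine.inputNode.removeTap(onBus: 0)

    // Signal that no more audio will be appended; this allows the recognizer
    // to deliver a result with isFinal == true to the resultHandler.
    request.endAudio()
}
```

Note that an `SFSpeechAudioBufferRecognitionRequest` cannot be reused after `endAudio()`; since `request` is declared with `let` in the question, it would need to become a `var` and be recreated before any subsequent recording.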