I am trying hard to perform continuous speech recognition on iOS 10 (beta) using AVCapture. I have set up captureOutput(...) to continuously get CMSampleBuffers. I put these buffers directly into the SFSpeechAudioBufferRecognitionRequest that I set up previously:
... do some setup

SFSpeechRecognizer.requestAuthorization { authStatus in
    if authStatus == SFSpeechRecognizerAuthorizationStatus.authorized {
        self.m_recognizer = SFSpeechRecognizer()
        self.m_recognRequest = SFSpeechAudioBufferRecognitionRequest()
        self.m_recognRequest?.shouldReportPartialResults = false
        self.m_isRecording = true
    } else {
        print("not authorized")
    }
}

.... do further setup
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    if (!m_AV_initialized) {
        print("captureOutput(...): not initialized !")
        return
    }
    if (!m_isRecording) {
        return
    }

    let formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer)
    let mediaType = CMFormatDescriptionGetMediaType(formatDesc!)
    if (mediaType == kCMMediaType_Audio) {
        // process audio here
        m_recognRequest?.appendAudioSampleBuffer(sampleBuffer)
    }
    return
}
The whole thing works for a few seconds. Then captureOutput is no longer called. If I comment out the appendAudioSampleBuffer(sampleBuffer) line, captureOutput is called for as long as the app runs (as expected). Apparently, feeding the sample buffers into the speech recognition engine somehow blocks further execution. My guess is that the available buffers get consumed after a while and the process stops somehow because it cannot get any more buffers???

I should mention that everything recorded within the first 2 seconds leads to correct recognition. I just don't know exactly how the SFSpeech API works, since Apple did not put any text into the beta docs. BTW: how is SFSpeechAudioBufferRecognitionRequest.endAudio() supposed to be used?

Does anyone know something here?

Thanks, Chris
Answer 0 (Score: 17)

I converted the SpeakToMe sample Swift code from the speech recognition WWDC developer talk to Objective-C, and it worked for me. For Swift, see https://developer.apple.com/videos/play/wwdc2016/509/, or for Objective-C, see below.
- (void)viewDidAppear:(BOOL)animated {
    _recognizer = [[SFSpeechRecognizer alloc] initWithLocale:[NSLocale localeWithLocaleIdentifier:@"en-US"]];
    [_recognizer setDelegate:self];

    [SFSpeechRecognizer requestAuthorization:^(SFSpeechRecognizerAuthorizationStatus authStatus) {
        switch (authStatus) {
            case SFSpeechRecognizerAuthorizationStatusAuthorized:
                // User gave access to speech recognition
                NSLog(@"Authorized");
                break;
            case SFSpeechRecognizerAuthorizationStatusDenied:
                // User denied access to speech recognition
                NSLog(@"SFSpeechRecognizerAuthorizationStatusDenied");
                break;
            case SFSpeechRecognizerAuthorizationStatusRestricted:
                // Speech recognition restricted on this device
                NSLog(@"SFSpeechRecognizerAuthorizationStatusRestricted");
                break;
            case SFSpeechRecognizerAuthorizationStatusNotDetermined:
                // Speech recognition not yet authorized
                break;
            default:
                NSLog(@"Default");
                break;
        }
    }];

    audioEngine = [[AVAudioEngine alloc] init];
    _speechSynthesizer = [[AVSpeechSynthesizer alloc] init];
    [_speechSynthesizer setDelegate:self];
}
- (void)startRecording
{
    [self clearLogs:nil];

    NSError *outError;

    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    [audioSession setCategory:AVAudioSessionCategoryRecord error:&outError];
    [audioSession setMode:AVAudioSessionModeMeasurement error:&outError];
    [audioSession setActive:true withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:&outError];

    request2 = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
    inputNode = [audioEngine inputNode];

    if (request2 == nil) {
        NSLog(@"Unable to create a SFSpeechAudioBufferRecognitionRequest object");
    }
    if (inputNode == nil) {
        NSLog(@"Unable to create an inputNode object");
    }

    request2.shouldReportPartialResults = true;
    _currentTask = [_recognizer recognitionTaskWithRequest:request2 delegate:self];

    [inputNode installTapOnBus:0 bufferSize:4096 format:[inputNode outputFormatForBus:0] block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
        NSLog(@"Block tap!");
        [request2 appendAudioPCMBuffer:buffer];
    }];

    [audioEngine prepare];
    [audioEngine startAndReturnError:&outError];
    NSLog(@"Error %@", outError);
}
- (void)speechRecognitionTask:(SFSpeechRecognitionTask *)task didFinishRecognition:(SFSpeechRecognitionResult *)result {
    NSLog(@"speechRecognitionTask:(SFSpeechRecognitionTask *)task didFinishRecognition");

    NSString *translatedString = [[[result bestTranscription] formattedString] stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]];
    [self log:translatedString];

    if ([result isFinal]) {
        [audioEngine stop];
        [inputNode removeTapOnBus:0];
        _currentTask = nil;
        request2 = nil;
    }
}
Answer 1 (Score: 13)

I succeeded in using SFSpeechRecognizer continuously. The main point is to use AVCaptureSession to capture the audio and feed it to the SpeechRecognizer. Sorry, my Swift is poor, so this is just the ObjC version.

Here is my sample code (some UI code is omitted, and some important parts are marked):
@interface ViewController () <AVCaptureAudioDataOutputSampleBufferDelegate, SFSpeechRecognitionTaskDelegate>
@property (nonatomic, strong) AVCaptureSession *capture;
@property (nonatomic, strong) SFSpeechAudioBufferRecognitionRequest *speechRequest;
@end

@implementation ViewController

- (void)startRecognizer
{
    [SFSpeechRecognizer requestAuthorization:^(SFSpeechRecognizerAuthorizationStatus status) {
        if (status == SFSpeechRecognizerAuthorizationStatusAuthorized) {
            NSLocale *local = [[NSLocale alloc] initWithLocaleIdentifier:@"fr_FR"];
            SFSpeechRecognizer *sf = [[SFSpeechRecognizer alloc] initWithLocale:local];
            self.speechRequest = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
            [sf recognitionTaskWithRequest:self.speechRequest delegate:self];
            // should call startCapture method in main queue or it may crash
            dispatch_async(dispatch_get_main_queue(), ^{
                [self startCapture];
            });
        }
    }];
}

- (void)endRecognizer
{
    // END capture and END voice reco
    // or Apple will terminate this task after 30000ms.
    [self endCapture];
    [self.speechRequest endAudio];
}

- (void)startCapture
{
    NSError *error;

    self.capture = [[AVCaptureSession alloc] init];
    AVCaptureDevice *audioDev = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    if (audioDev == nil) {
        NSLog(@"Couldn't create audio capture device");
        return;
    }

    // create mic device
    AVCaptureDeviceInput *audioIn = [AVCaptureDeviceInput deviceInputWithDevice:audioDev error:&error];
    if (error != nil) {
        NSLog(@"Couldn't create audio input");
        return;
    }

    // add mic device to capture object
    if ([self.capture canAddInput:audioIn] == NO) {
        NSLog(@"Couldn't add audio input");
        return;
    }
    [self.capture addInput:audioIn];

    // export audio data
    AVCaptureAudioDataOutput *audioOutput = [[AVCaptureAudioDataOutput alloc] init];
    [audioOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
    if ([self.capture canAddOutput:audioOutput] == NO) {
        NSLog(@"Couldn't add audio output");
        return;
    }
    [self.capture addOutput:audioOutput];
    [audioOutput connectionWithMediaType:AVMediaTypeAudio];
    [self.capture startRunning];
}

- (void)endCapture
{
    if (self.capture != nil && [self.capture isRunning]) {
        [self.capture stopRunning];
    }
}

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    [self.speechRequest appendAudioSampleBuffer:sampleBuffer];
}

// some Recognition Delegate methods

@end
Answer 2 (Score: 9)

Here is a Swift (3.0) implementation of @cube's answer:
import UIKit
import Speech
import AVFoundation

class ViewController: UIViewController {

    @IBOutlet weak var console: UITextView!

    var capture: AVCaptureSession?
    var speechRequest: SFSpeechAudioBufferRecognitionRequest?

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        startRecognizer()
    }

    func startRecognizer() {
        SFSpeechRecognizer.requestAuthorization { (status) in
            switch status {
            case .authorized:
                let locale = NSLocale(localeIdentifier: "fr_FR")
                let sf = SFSpeechRecognizer(locale: locale as Locale)
                self.speechRequest = SFSpeechAudioBufferRecognitionRequest()
                sf?.recognitionTask(with: self.speechRequest!, delegate: self)
                DispatchQueue.main.async {
                    self.startCapture()
                }
            case .denied:
                fallthrough
            case .notDetermined:
                fallthrough
            case .restricted:
                print("User Authorization Issue.")
            }
        }
    }

    func endRecognizer() {
        endCapture()
        speechRequest?.endAudio()
    }

    func startCapture() {
        capture = AVCaptureSession()

        guard let audioDev = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeAudio) else {
            print("Could not get capture device.")
            return
        }

        guard let audioIn = try? AVCaptureDeviceInput(device: audioDev) else {
            print("Could not create input device.")
            return
        }

        guard true == capture?.canAddInput(audioIn) else {
            print("Could not add input device")
            return
        }
        capture?.addInput(audioIn)

        let audioOut = AVCaptureAudioDataOutput()
        audioOut.setSampleBufferDelegate(self, queue: DispatchQueue.main)

        guard true == capture?.canAddOutput(audioOut) else {
            print("Could not add audio output")
            return
        }
        capture?.addOutput(audioOut)

        audioOut.connection(withMediaType: AVMediaTypeAudio)
        capture?.startRunning()
    }

    func endCapture() {
        if true == capture?.isRunning {
            capture?.stopRunning()
        }
    }
}

extension ViewController: AVCaptureAudioDataOutputSampleBufferDelegate {
    func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
        speechRequest?.appendAudioSampleBuffer(sampleBuffer)
    }
}

extension ViewController: SFSpeechRecognitionTaskDelegate {
    func speechRecognitionTask(_ task: SFSpeechRecognitionTask, didFinishRecognition recognitionResult: SFSpeechRecognitionResult) {
        console.text = console.text + "\n" + recognitionResult.bestTranscription.formattedString
    }
}
Don't forget to add a value for NSSpeechRecognitionUsageDescription to your info.plist file, or it will crash.
Answer 3 (Score: 6)

It turns out that Apple's new native speech recognition does not automatically detect silence at the end of speech (a bug?), which is useful in your case, because speech recognition stays active for close to a minute (the maximum period allowed by Apple's service). So basically, if you need continuous ASR, you have to restart speech recognition whenever your delegate fires:
func speechRecognitionTask(task: SFSpeechRecognitionTask, didFinishSuccessfully successfully: Bool) // whether successfully == true or not
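For illustration only (not part of the original answer), a minimal sketch of what that restart could look like, assuming the audioEngine, node, and nativeASRRequest properties and the startNativeRecording() method from the code below, in the same Swift 2-era syntax:

func speechRecognitionTask(task: SFSpeechRecognitionTask, didFinishSuccessfully successfully: Bool) {
    // Tear down the previous engine/tap, then start a fresh request and task.
    audioEngine.stop()
    node?.removeTapOnBus(0)
    // A finished request cannot be reused, so create a new one before restarting.
    nativeASRRequest = SFSpeechAudioBufferRecognitionRequest()
    do {
        try startNativeRecording()   // re-installs the tap and starts a new recognition task (defined below)
    } catch {
        print("Could not restart native recording: \(error)")
    }
}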
Here is the recording / speech-recognition Swift code I use; it works perfectly. Ignore the part where I compute the average power of the microphone volume if you don't need it; I use it to animate a waveform. Don't forget to set the SFSpeechRecognitionTaskDelegate and its delegate methods. If you need the extra code, let me know.
func startNativeRecording() throws {
    LEVEL_LOWPASS_TRIG = 0.01

    // Setup audio session
    node = audioEngine.inputNode!
    let recordingFormat = node!.outputFormatForBus(0)

    node!.installTapOnBus(0, bufferSize: 1024, format: recordingFormat) { (buffer, _) in
        self.nativeASRRequest.appendAudioPCMBuffer(buffer)

        // Code to animate a waveform with the microphone volume, ignore if you don't need it:
        var inNumberFrames: UInt32 = buffer.frameLength
        var samples: Float32 = buffer.floatChannelData[0][0] // https://github.com/apple/swift-evolution/blob/master/proposals/0107-unsaferawpointer.md
        var avgValue: Float32 = 0
        vDSP_maxmgv(buffer.floatChannelData[0], 1, &avgValue, vDSP_Length(inNumberFrames)) // Accelerate framework
        // vDSP_maxmgv returns peak values
        // vDSP_meamgv returns the mean magnitude of a vector
        let avg3: Float32 = ((avgValue == 0) ? (0 - 100) : 20.0)
        var averagePower = (self.LEVEL_LOWPASS_TRIG * avg3 * log10f(avgValue)) + ((1 - self.LEVEL_LOWPASS_TRIG) * self.averagePowerForChannel0)
        print("AVG. POWER: " + averagePower.description)

        dispatch_async(dispatch_get_main_queue(), { () -> Void in
            //print("VU: " + vu.description)
            var fAvgPwr = CGFloat(averagePower)
            print("AvgPwr: " + fAvgPwr.description)
            var waveformFriendlyValue = 0.5 + fAvgPwr // -0.5 is the AvgPwr value when the user is silent
            if (waveformFriendlyValue < 0) { waveformFriendlyValue = 0 } // round values < 0 to 0
            self.waveview.hidden = false
            self.waveview.updateWithLevel(waveformFriendlyValue)
        })
    }

    audioEngine.prepare()
    try audioEngine.start()
    isNativeASRBusy = true
    nativeASRTask = nativeSpeechRecognizer?.recognitionTaskWithRequest(nativeASRRequest, delegate: self)
    nativeSpeechRecognizer?.delegate = self
    // I use this timer to track no-speech timeouts, ignore if not needed:
    self.endOfSpeechTimeoutTimer = NSTimer.scheduledTimerWithTimeInterval(utteranceTimeoutSeconds, target: self, selector: #selector(ViewController.stopNativeRecording), userInfo: nil, repeats: false)
}
Answer 4 (Score: 0)

This works perfectly in my application. You can send queries to saifurrahman3126@gmail.com. Apple does not allow users to translate continuously for more than one minute. Check here: https://developer.apple.com/documentation/speech/sfspeechrecognizer

"Plan for a one-minute limit on audio duration. Speech recognition places a relatively high burden on battery life and network usage. To minimize this burden, the framework stops speech recognition tasks that last longer than one minute. This limit is similar to the one for keyboard-related dictation." This is what Apple says in its documentation.

For now, I make a request for 40 seconds; if you speak and then pause before the 40 seconds are up, I reconnect and the recording starts again.
@objc func startRecording() {

    self.fullsTring = ""
    audioEngine.reset()

    if recognitionTask != nil {
        recognitionTask?.cancel()
        recognitionTask = nil
    }

    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(.record)
        try audioSession.setMode(.measurement)
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
        try audioSession.setPreferredSampleRate(44100.0)

        if audioSession.isInputGainSettable {
            let error: NSErrorPointer = nil
            let success = try? audioSession.setInputGain(1.0)

            guard success != nil else {
                print("audio error")
                return
            }
            if (success != nil) {
                print("\(String(describing: error))")
            }
        } else {
            print("Cannot set input gain")
        }
    } catch {
        print("audioSession properties weren't set because of an error.")
    }

    recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
    let inputNode = audioEngine.inputNode

    guard let recognitionRequest = recognitionRequest else {
        fatalError("Unable to create an SFSpeechAudioBufferRecognitionRequest object")
    }
    recognitionRequest.shouldReportPartialResults = true

    self.timer4 = Timer.scheduledTimer(timeInterval: TimeInterval(40), target: self, selector: #selector(againStartRec), userInfo: nil, repeats: false)

    recognitionTask = speechRecognizer.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in

        var isFinal = false

        if result != nil {
            self.timer.invalidate()
            self.timer = Timer.scheduledTimer(timeInterval: TimeInterval(2.0), target: self, selector: #selector(self.didFinishTalk), userInfo: nil, repeats: false)

            let bestString = result?.bestTranscription.formattedString
            self.fullsTring = bestString!
            self.inputContainerView.inputTextField.text = result?.bestTranscription.formattedString
            isFinal = result!.isFinal
        }

        if error == nil {
        }

        if isFinal {
            self.audioEngine.stop()
            inputNode.removeTap(onBus: 0)
            self.recognitionRequest = nil
            self.recognitionTask = nil
            isFinal = false
        }

        if error != nil {
            URLCache.shared.removeAllCachedResponses()
            self.audioEngine.stop()
            inputNode.removeTap(onBus: 0)
            guard let task = self.recognitionTask else {
                return
            }
            task.cancel()
            task.finish()
        }
    })

    audioEngine.reset()
    inputNode.removeTap(onBus: 0)

    let recordingFormat = AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 1)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
        self.recognitionRequest?.append(buffer)
    }

    audioEngine.prepare()
    do {
        try audioEngine.start()
    } catch {
        print("audioEngine couldn't start because of an error.")
    }
    self.hasrecorded = true
}
@objc func againStartRec() {

    self.inputContainerView.uploadImageView.setBackgroundImage(#imageLiteral(resourceName: "microphone"), for: .normal)
    self.inputContainerView.uploadImageView.alpha = 1.0
    self.timer4.invalidate()
    timer.invalidate()
    self.timer.invalidate()

    if ((self.audioEngine.isRunning)) {
        self.audioEngine.stop()
        self.recognitionRequest?.endAudio()
        self.recognitionTask?.finish()
    }
    self.timer2 = Timer.scheduledTimer(timeInterval: 2, target: self, selector: #selector(startRecording), userInfo: nil, repeats: false)
}
@objc func didFinishTalk() {

    if self.fullsTring != "" {

        self.timer4.invalidate()
        self.timer.invalidate()
        self.timer2.invalidate()

        if ((self.audioEngine.isRunning)) {
            self.audioEngine.stop()
            guard let task = self.recognitionTask else {
                return
            }
            task.cancel()
            task.finish()
        }
    }
}
Answer 5 (Score: 0)

If you enable on-device-only recognition, it will not stop speech recognition automatically after 1 minute:

.requiresOnDeviceRecognition = true

More about requiresOnDeviceRecognition;
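As a rough illustration (not from the original answer), configuring this on the recognition request could look like the sketch below. The recognizer and request names are placeholders, and both requiresOnDeviceRecognition and supportsOnDeviceRecognition are available starting with iOS 13:

import Speech

// Sketch: prefer on-device recognition when the recognizer supports it (iOS 13+).
let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
let request = SFSpeechAudioBufferRecognitionRequest()

if #available(iOS 13, *), recognizer?.supportsOnDeviceRecognition == true {
    // On-device requests are not subject to the server-side one-minute limit.
    request.requiresOnDeviceRecognition = true
}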