Error implementing SFSpeechAudioBufferRecognitionRequest: Domain=kAFAssistantErrorDomain Code=216

Date: 2018-09-20 09:47:20

Tags: ios objective-c speech-recognition speech-to-text

I am getting an error while implementing SFSpeechAudioBufferRecognitionRequest in Objective-C. This is my code; it was still working a day ago. The error is Domain=kAFAssistantErrorDomain Code=216 "(null)".

- (void)startListening {

// Initialize the AVAudioEngine
audioEngine = [[AVAudioEngine alloc] init];

// Make sure there's not a recognition task already running
if (recognitionTask) {
    [recognitionTask cancel];
    recognitionTask = nil;
}

// Starts an AVAudio Session
NSError *error;
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
[audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
[audioSession setActive:YES withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:&error];

// Starts a recognition process, in the block it logs the input or stops the audio
// process if there's an error.
recognitionRequest = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
inputNode = audioEngine.inputNode;
recognitionRequest.shouldReportPartialResults = NO;
recognitionRequest.taskHint = SFSpeechRecognitionTaskHintDictation;
[self startWaveAudio];

// Sets the recording format
AVAudioFormat *recordingFormat = [inputNode outputFormatForBus:0];
[inputNode installTapOnBus:0 bufferSize:4096 format:recordingFormat block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
    [recognitionRequest appendAudioPCMBuffer:buffer];
}];

// Starts the audio engine, i.e. it starts listening.
[audioEngine prepare];
[audioEngine startAndReturnError:&error];


__block BOOL isFinal = NO;

recognitionTask = [speechRecognizer recognitionTaskWithRequest:recognitionRequest resultHandler:^(SFSpeechRecognitionResult * _Nullable result, NSError * _Nullable error) {

    [self stopWaveAudio];

    if (result) {
        // Whatever you say in the microphone after pressing the button should be being logged
        // in the console.
        NSLog(@"RESULT:%@",result.bestTranscription.formattedString);

        for (SFTranscription *tra in result.transcriptions) {
            NSLog(@"Multiple Results : %@", tra.formattedString);
        }

        if(isFinal == NO) {
            [self calculateResultOfSpeechWithResultString:result.bestTranscription.formattedString];
        }
        isFinal = !result.isFinal;
    }
    if (error || isFinal) {
        NSLog(@"Error Description : %@", error);
        [self stopRecording];

    }
}];
}

- (IBAction)tap2TlkBtnPrsd:(UIButton *)sender {
    if (audioEngine.isRunning) {
        [self stopRecording];
    } else {
        [self startListening];
    }

    isMicOn = !isMicOn;
    micPrompt = NO;
}

- (void)stopRecording {

    // dispatch_async(dispatch_get_main_queue(), ^{

    if (audioEngine.isRunning) {
        [inputNode removeTapOnBus:0];
        [inputNode reset];
        [audioEngine stop];
        [recognitionRequest endAudio];
        [recognitionTask cancel];
        recognitionTask = nil;
        recognitionRequest = nil;
    }

    // });
}

I am also trying other approaches, for example appending the audio buffers only after the recognition request has been created and the task started.
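A rough sketch of that ordering, reusing the same instance variables as above (illustrative only, not my actual code):

// Illustrative ordering: create the request and start the recognition task first,
// then install the tap that appends buffers to the request.
recognitionRequest = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
recognitionRequest.shouldReportPartialResults = NO;

recognitionTask = [speechRecognizer recognitionTaskWithRequest:recognitionRequest resultHandler:^(SFSpeechRecognitionResult * _Nullable result, NSError * _Nullable error) {
    if (result) {
        NSLog(@"RESULT:%@", result.bestTranscription.formattedString);
    }
}];

// Only after the task exists does the input node start feeding it audio.
AVAudioFormat *format = [audioEngine.inputNode outputFormatForBus:0];
[audioEngine.inputNode installTapOnBus:0 bufferSize:4096 format:format block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
    [recognitionRequest appendAudioPCMBuffer:buffer];
}];
[audioEngine prepare];
[audioEngine startAndReturnError:nil];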

If possible, can someone tell me how to implement a scenario where the user spells out a word and the result is only that word?

1 Answer:

Answer 0 (score: 1)

I had the same Error = 216 when cancelling the recognition task. The isFinal property of SFSpeechRecognitionResult is only true once the recognizer decides the speaker has finished. So on your first callback result.isFinal is false, your line isFinal = !result.isFinal; flips your flag to YES, the block containing stopRecording is entered, and stopRecording cancels the task with [recognitionTask cancel];, which produces the error.
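A minimal sketch of the flag handling that avoids cancelling on that first callback (keeping the rest of your handler unchanged):

if (result) {
    NSLog(@"RESULT:%@", result.bestTranscription.formattedString);
    // Track the recognizer's own notion of finality instead of inverting it,
    // so the stopRecording branch is not entered on a non-final result.
    isFinal = result.isFinal;
}
if (error != nil || isFinal) {
    NSLog(@"Error Description : %@", error);
    [self stopRecording];
}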

So, if you only want the first transcribed word, you can read the substring property of the first segment of bestTranscription, and then call

[recognitionTask finish];
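
Putting it together, a rough sketch of a result handler along those lines (segments and substring are properties of SFTranscription and SFTranscriptionSegment; the rest reuses your existing variables):

recognitionTask = [speechRecognizer recognitionTaskWithRequest:recognitionRequest resultHandler:^(SFSpeechRecognitionResult * _Nullable result, NSError * _Nullable error) {
    if (result) {
        // Take only the first recognized segment, i.e. the first word.
        SFTranscriptionSegment *firstSegment = result.bestTranscription.segments.firstObject;
        NSLog(@"First word: %@", firstSegment.substring);

        // Let the task finish gracefully instead of cancelling it,
        // which is what produced kAFAssistantErrorDomain Code=216.
        [recognitionTask finish];
    }
    if (error) {
        NSLog(@"Error Description : %@", error);
        [self stopRecording];
    }
}];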