Why are my SFSpeechRecognitionTaskDelegate methods not being called in iOS?

Asked: 2018-03-23 09:31:46

Tags: ios objective-c delegates sfspeechrecognizer

So I have some delegate methods, and I have set the delegate via
[speechRecognizer recognitionTaskWithRequest:recognitionRequest delegate:self];

but my delegate methods are not being called.

Is there something wrong with my recording method?

- (void)startRecording
{
    audioEngine = [[AVAudioEngine alloc] init];

    if (recognitionTask) {
        NSLog(@"recognition task already running");
        [recognitionTask cancel];
        recognitionTask = nil;
    }

    NSError *error;
    audioSession = [AVAudioSession sharedInstance];
    [audioSession setCategory:AVAudioSessionCategoryRecord error:&error];
    [audioSession setMode:AVAudioSessionModeMeasurement error:&error];
    [audioSession setActive:true withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:&error];

    inputNode = [self getInputNode];
    [speechRecognizer recognitionTaskWithRequest:recognitionRequest delegate:self];
    recognitionRequest.shouldReportPartialResults = YES;

    recognitionTask = [speechRecognizer recognitionTaskWithRequest:recognitionRequest resultHandler:^(SFSpeechRecognitionResult * _Nullable result, NSError * _Nullable error) {
        BOOL isFinal = NO;
        if (result) {
            NSLog(@"RESULT - %@",result.bestTranscription.formattedString);
            isFinal = result.isFinal;
        }
        if (error) {
            NSLog(@"error = %@", error.localizedDescription);
        }
    }];
    AVAudioFormat *recordingFormat = [inputNode outputFormatForBus:0];
    [inputNode installTapOnBus:0 bufferSize:1024 format:recordingFormat block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
        [recognitionRequest appendAudioPCMBuffer:buffer];
    }];

    // Starts the audio engine, i.e. it starts listening.
    [audioEngine prepare];
    [audioEngine startAndReturnError:&error];
    NSLog(@"Say Something, I'm listening new");
}

My delegate methods are

- (void)speechRecognizer:(SFSpeechRecognizer *)speechRecognizer availabilityDidChange:(BOOL)available {
    NSLog(@"Availability:%d",available);
}
- (void) speechRecognitionDidDetectSpeech:(SFSpeechRecognitionTask *)task
{
    NSLog(@"State: %ld ",(long)task.state);
}
-(void) speechRecognitionTaskFinishedReadingAudio:(SFSpeechRecognitionTask *)task
{
    NSLog(@"State: %ld", (long)task.state);
}

-(void) speechRecognitionTask:(SFSpeechRecognitionTask *)task didHypothesizeTranscription:(SFTranscription *)transcription
{
    recognizedText = transcription.formattedString;
    [self stopNoAudioDurationTimer];
    [self startNoAudioDurationTimer];
    NSLog(@"Segments: %@", transcription.segments);
    NSLog(@"Last segment: %@", transcription.segments.lastObject);
}
-(void) speechRecognitionTask:(SFSpeechRecognitionTask *)task didFinishRecognition:(SFSpeechRecognitionResult *)recognitionResult
{
    recognizedText = recognitionResult.bestTranscription.formattedString;
}

-(void) speechRecognitionTask:(SFSpeechRecognitionTask *)task didFinishSuccessfully:(BOOL)successfully
{
    [self stopNoAudioDurationTimer];
}

1 Answer:

Answer 0 (score: 0)

It looks like you are creating two different tasks:

// first one here
[speechRecognizer recognitionTaskWithRequest:recognitionRequest delegate:self];

//second one here
recognitionTask = [speechRecognizer recognitionTaskWithRequest:recognitionRequest resultHandler:^(SFSpeechRecognitionResult * _Nullable result, NSError * _Nullable error) {
    BOOL isFinal = NO;
    if (result) {
        NSLog(@"RESULT - %@",result.bestTranscription.formattedString);
        isFinal = result.isFinal;
    }
    if (error) {
        NSLog(@"error = %@", error.localizedDescription);
    }
}];

I think you should keep only the first, delegate-based call (and assign its return value to `recognitionTask`); otherwise the second call starts a competing block-based task on the same request, and your delegate methods will not be called.
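
A minimal sketch of the corrected setup, assuming `recognitionRequest`, `recognitionTask`, `speechRecognizer`, and `inputNode` are instance variables as in the question. Note that the original snippet also never initializes `recognitionRequest`, which by itself would prevent any callbacks:

```objc
// Create the request before configuring or using it.
recognitionRequest = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
recognitionRequest.shouldReportPartialResults = YES;

// Create ONE task via the delegate API and keep the reference.
// Do not also call recognitionTaskWithRequest:resultHandler: --
// that would start a second, competing task on the same request.
recognitionTask = [speechRecognizer recognitionTaskWithRequest:recognitionRequest
                                                      delegate:self];

// Feed microphone audio into the request, as in the question.
AVAudioFormat *recordingFormat = [inputNode outputFormatForBus:0];
[inputNode installTapOnBus:0
                bufferSize:1024
                    format:recordingFormat
                     block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
    [recognitionRequest appendAudioPCMBuffer:buffer];
}];
```

With a single delegate-based task, partial and final results arrive through `speechRecognitionTask:didHypothesizeTranscription:` and `speechRecognitionTask:didFinishRecognition:` instead of a result-handler block.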