Speech Recognition on iPhone 5

Date: 2017-07-12 12:50:00

Tags: ios objective-c speech-recognition speech-to-text ios10.2

I am using the iOS speech recognition API from an Objective-C iOS app. It works on the iPhone 6 and 7, but it does not work on the iPhone 5 (iOS 10.2.1).

Also note that it works on the iPhone 5s, just not on the iPhone 5.

Is the iOS speech API supposed to work on the iPhone 5? Do you have to do anything differently to get it to work, or does anyone know what the problem is?

The basic code is below. No errors occur, and the microphone volume is detected, but no speech is detected.

    if (audioEngine != nil) {
        [audioEngine stop];
        [speechTask cancel];
        AVAudioInputNode* inputNode = [audioEngine inputNode];
        [inputNode removeTapOnBus: 0];
    }

    recording = YES;
    micButton.selected = YES;

    //NSLog(@"Starting recording...   SFSpeechRecognizer Available? %d", [speechRecognizer isAvailable]);
    NSError * outError;
    //NSLog(@"AUDIO SESSION CATEGORY0: %@", [[AVAudioSession sharedInstance] category]);
    AVAudioSession* audioSession = [AVAudioSession sharedInstance];
    [audioSession setCategory: AVAudioSessionCategoryPlayAndRecord withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker error:&outError];
    [audioSession setMode: AVAudioSessionModeMeasurement error:&outError];
    [audioSession setActive: true withOptions: AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:&outError];

    SFSpeechAudioBufferRecognitionRequest* speechRequest = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
    //NSLog(@"AUDIO SESSION CATEGORY1: %@", [[AVAudioSession sharedInstance] category]);
    if (speechRequest == nil) {
        NSLog(@"Unable to create SFSpeechAudioBufferRecognitionRequest.");
        return;
    }

    speechDetectionSamples = 0;

    // This somehow fixes a crash on iPhone 7.
    // Seems like a bug related to iOS ARC / the lack of GC.
    AVAudioEngine* temp = audioEngine;
    audioEngine = [[AVAudioEngine alloc] init];
    AVAudioInputNode* inputNode = [audioEngine inputNode];

    speechRequest.shouldReportPartialResults = true;

    // iOS speech does not detect end of speech, so must track silence.
    lastSpeechDetected = -1;

    speechTask = [speechRecognizer recognitionTaskWithRequest: speechRequest delegate: self];

    [inputNode installTapOnBus:0 bufferSize: 4096 format: [inputNode outputFormatForBus:0] block:^(AVAudioPCMBuffer* buffer, AVAudioTime* when) {
        @try {
            long millis = [[NSDate date] timeIntervalSince1970] * 1000;
            if (lastSpeechDetected != -1 && ((millis - lastSpeechDetected) > 1000)) {
                lastSpeechDetected = -1;
                [speechTask finish];
                return;
            }
            [speechRequest appendAudioPCMBuffer: buffer];

            //Calculate volume level
            if ([buffer floatChannelData] != nil) {
                float volume = fabsf(*buffer.floatChannelData[0]);

                if (volume >= speechDetectionThreshold) {
                    speechDetectionSamples++;

                    if (speechDetectionSamples >= speechDetectionSamplesNeeded) {

                        //Need to change mic button image in main thread
                        [[NSOperationQueue mainQueue] addOperationWithBlock:^ {

                            [micButton setImage: [UIImage imageNamed: @"micRecording"] forState: UIControlStateSelected];

                        }];
                    }
                } else {
                    speechDetectionSamples = 0;
                }
            }
        }
        @catch (NSException * e) {
            NSLog(@"Exception: %@", e);
        }
    }];

    [audioEngine prepare];
    [audioEngine startAndReturnError: &outError];
    NSLog(@"Error %@", outError);

1 Answer:

Answer 0 (score: 2):

I think there is a bug in this code:

long millis = [[NSDate date] timeIntervalSince1970] * 1000;

On 32-bit devices (and the iPhone 5 is a 32-bit device), a long is 32 bits wide and can hold at most 2^31 - 1, i.e. 2,147,483,647.
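
To see why that overflows (a standalone sketch, not part of the question's code): a Unix timestamp from 2017 is roughly 1.5 × 10^9 seconds, so multiplying by 1000 gives about 1.5 × 10^12, far beyond the 32-bit range. The double-to-long conversion is then undefined and produces an unusable value:

    NSTimeInterval seconds = [[NSDate date] timeIntervalSince1970]; // ~1.5e9 in 2017
    long bad = seconds * 1000;                      // ~1.5e12 cannot fit in a 32-bit long
    long long good = (long long)(seconds * 1000.0); // long long is 64 bits on all iOS devices
    NSLog(@"long: %ld, long long: %lld", bad, good);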

I checked on the iPhone 5 simulator, and millis had a negative value. The snippet you posted does not show how lastSpeechDetected is set after it is initially set to -1, but if ((millis - lastSpeechDetected) > 1000) somehow evaluates to true, the code enters the if-block and finishes the speech task prematurely.
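
A minimal sketch of one possible fix, using a fixed-width 64-bit type (this assumes lastSpeechDetected is also redeclared as int64_t; both names come from the question's code):

    // int64_t is 64 bits everywhere, so the millisecond timestamp
    // also fits on 32-bit devices such as the iPhone 5.
    int64_t millis = (int64_t)([[NSDate date] timeIntervalSince1970] * 1000.0);
    if (lastSpeechDetected != -1 && (millis - lastSpeechDetected) > 1000) {
        lastSpeechDetected = -1;
        [speechTask finish];
        return;
    }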