What is the correct way to use SFSpeechRecognizer?

Date: 2017-01-13 10:59:06

Tags: ios objective-c speech-recognition sfspeechrecognizer

I am trying to use SFSpeechRecognizer, but I have no way to test whether I have implemented it correctly, and since it is a relatively new class I could not find any sample code (and I don't know Swift). Here is my implementation:

    [SFSpeechRecognizer requestAuthorization:^(SFSpeechRecognizerAuthorizationStatus status) {
        if (status == SFSpeechRecognizerAuthorizationStatusAuthorized) {
            SFSpeechRecognizer *recognizer = [[SFSpeechRecognizer alloc] init];
            recognizer.delegate = self;
            SFSpeechAudioBufferRecognitionRequest *request = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
            request.contextualStrings = @[@"data", @"bank", @"databank"];
            SFSpeechRecognitionTask *task = [recognizer recognitionTaskWithRequest:request
                                                                    resultHandler:^(SFSpeechRecognitionResult *result, NSError *error) {
                SFTranscription *transcript = result.bestTranscription;
                NSLog(@"%@", transcript);
            }];
        }
    }];

Am I making any unforgivable mistakes / missing anything?


1 answer:

Answer 0 (score: 1)

I am also still experimenting with this, but the following code works for me. Note that SFSpeechRecognizer and SFSpeechAudioBufferRecognitionRequest are not the same thing, so I think (I have not tested this) you have to request separate permissions (did you request permission beforehand to use the microphone and speech recognition?). OK, here is the code:

    // Available from iOS 10; limited to roughly 1 minute of audio and requires
    // an internet connection. Input can come from a recorded audio file or
    // from the microphone.

    NSLocale *locale = [[NSLocale alloc] initWithLocaleIdentifier:@"es-MX"];
    speechRecognizer = [[SFSpeechRecognizer alloc] initWithLocale:locale];
    // myDir is the directory containing the recording.
    NSString *soundFilePath = [myDir stringByAppendingPathComponent:@"sound.m4a"];
    NSURL *url = [NSURL fileURLWithPath:soundFilePath];
    if (!speechRecognizer.isAvailable)
        NSLog(@"speechRecognizer is not available; it may have no internet connection");
    SFSpeechURLRecognitionRequest *urlRequest = [[SFSpeechURLRecognitionRequest alloc] initWithURL:url];
    urlRequest.shouldReportPartialResults = YES; // YES to show the text as it is recognized
    [speechRecognizer recognitionTaskWithRequest:urlRequest
                                   resultHandler:^(SFSpeechRecognitionResult * _Nullable result, NSError * _Nullable error) {
        if (!error) {
            NSString *transcriptText = result.bestTranscription.formattedString;
            NSLog(@"%@", transcriptText);
        }
    }];
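
To make the point about the two separate permissions concrete, here is a minimal, untested sketch of requesting both authorizations before starting a task. It assumes iOS 10+, and that the app's Info.plist already declares the NSSpeechRecognitionUsageDescription and NSMicrophoneUsageDescription keys (without them the requests will terminate the app); the method name requestPermissions is just a placeholder:

    // Sketch (untested): request speech-recognition and microphone
    // authorization before starting a recognition task.
    #import <Speech/Speech.h>
    #import <AVFoundation/AVFoundation.h>

    - (void)requestPermissions {
        // 1. Speech-recognition authorization (covers the transcription itself).
        [SFSpeechRecognizer requestAuthorization:^(SFSpeechRecognizerAuthorizationStatus status) {
            if (status != SFSpeechRecognizerAuthorizationStatusAuthorized) {
                NSLog(@"Speech recognition not authorized");
                return;
            }
            // 2. Microphone access, needed only for live (buffer-based) requests;
            //    an SFSpeechURLRecognitionRequest on a file does not need it.
            [[AVAudioSession sharedInstance] requestRecordPermission:^(BOOL granted) {
                if (granted) {
                    NSLog(@"Ready to record and transcribe");
                } else {
                    NSLog(@"Microphone access denied");
                }
            }];
        }];
    }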