Google Cloud Speech API response: parsing on iOS

Asked: 2017-02-06 07:07:15

Tags: ios google-cloud-speech

I am trying to integrate the Google Cloud Speech API into my demo app. The result I get looks like this:

    {
      results {
        alternatives {
          transcript: "hello"
        }
        stability: 0.01
      }
    }

The code that receives the response:

    [[SpeechRecognitionService sharedInstance] streamAudioData:self.audioData
        withCompletion:^(StreamingRecognizeResponse *response, NSError *error) {
          if (error) {
            NSLog(@"ERROR: %@", error);
            _textView.text = [error localizedDescription];
            [self stopAudio:nil];
          } else if (response) {
            BOOL finished = NO;
            //NSLog(@"RESPONSE: %@", response.resultsArray);
            for (StreamingRecognitionResult *result in response.resultsArray) {
              NSLog(@"result : %@", result);
              //_textView.text = result.alternatives.transcript;
              if (result.isFinal) {
                finished = YES;
              }
            }

            if (finished) {
              [self stopAudio:nil];
            }
          }
        }];

My problem is that the response I get is not proper JSON, so how do I get the value of the key transcript? Any help would be appreciated. Thanks.

2 answers:

Answer 0 (score: 1)

For anyone still looking for a solution to this problem: the response is not JSON but a protocol buffer message, so you read the transcript through the generated accessors by iterating the nested arrays:

    for (StreamingRecognitionResult *result in response.resultsArray) {
      // Each element of alternativesArray is a SpeechRecognitionAlternative,
      // so use that type (not StreamingRecognitionResult) and read the
      // transcript property directly instead of going through valueForKey:.
      for (SpeechRecognitionAlternative *alternative in result.alternativesArray) {
        _textView.text = alternative.transcript;
      }
      if (result.isFinal) {
        finished = YES;
      }
    }

This is how I continuously get the value of transcript.

Answer 1 (score: 0)

这里的代码将解决你在Swift4 / iOS11.2.5上的问题,享受!:

    SpeechRecognitionService.sharedInstance.streamAudioData(audioData, completion: { [weak self] (response, error) in
        guard let strongSelf = self else {
            return
        }
        if let error = error {
            print("*** Streaming ASR ERROR: " + error.localizedDescription)
        } else if let response = response {
            for result in response.resultsArray {
                print("result: ")  // log to console
                print(result)
                if let streamingResult = result as? StreamingRecognitionResult {
                    for a in streamingResult.alternativesArray {
                        if let alternative = a as? SpeechRecognitionAlternative {
                            print("alternative: ")  // log to console
                            print(alternative)
                            if streamingResult.isFinal {
                                print("*** FINAL ASR result: " + alternative.transcript)
                                strongSelf.stopGoogleStreamingASR(strongSelf)
                            } else {
                                print("*** PARTIAL ASR result: " + alternative.transcript)
                            }
                        }
                    }
                } else {
                    print("ERROR: result is not a StreamingRecognitionResult")
                }
            }
        }
    })