The following code works in Java:
try (SpeechClient speech = SpeechClient.create()) {
    Path path = Paths.get(fileName);
    byte[] data = Files.readAllBytes(path);
    ByteString audioBytes = ByteString.copyFrom(data);

    // Configure request with local raw PCM audio
    RecognitionConfig config =
        RecognitionConfig.newBuilder()
            .setEncoding(AudioEncoding.LINEAR16)
            .setLanguageCode("en-US")
            .setSampleRateHertz(16000)
            .setEnableWordTimeOffsets(true)
            .build();
    RecognitionAudio audio = RecognitionAudio.newBuilder().setContent(audioBytes).build();

    // Use blocking call to get audio transcript
    RecognizeResponse response = speech.recognize(config, audio);
    List<SpeechRecognitionResult> results = response.getResultsList();
    for (SpeechRecognitionResult result : results) {
        // ...
    }
}
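The elided loop body only reads the alternatives and the word time offsets, roughly like this (a sketch):

// Sketch of the loop body: print the best alternative and its word time offsets
SpeechRecognitionAlternative alternative = result.getAlternatives(0);
System.out.printf("Transcript: %s%n", alternative.getTranscript());
for (WordInfo wordInfo : alternative.getWordsList()) {
    System.out.printf("Word: %s, start: %d.%09d s%n",
        wordInfo.getWord(),
        wordInfo.getStartTime().getSeconds(),
        wordInfo.getStartTime().getNanos());
}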
But when I try to use it on Android, the recognize call never returns. All dependencies are fine; there are no conflicts and no errors.
In Android, I do this:
FixedCredentialsProvider credentialsProvider = FixedCredentialsProvider.create(credential);
SpeechSettings speechSettings = SpeechSettings.newBuilder()
    .setCredentialsProvider(credentialsProvider)
    .build();
mApi = SpeechClient.create(speechSettings);

byte[] content = this.readAllbytesFrom(fileName);
RecognitionAudio recognitionAudio =
    RecognitionAudio.newBuilder().setContent(ByteString.copyFrom(content)).build();
RecognitionConfig config =
    RecognitionConfig.newBuilder()
        .setEncoding(RecognitionConfig.AudioEncoding.OGG_OPUS)
        .setSampleRateHertz(SpeechConfiguration.SAMPLE_RATE)
        .setLanguageCode(getDefaultLanguageCode())
        .build();

// Perform the transcription request
RecognizeResponse response = mApi.recognize(config, recognitionAudio);
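For context, the credential object above is a service-account credential loaded from a JSON key; it is built roughly like this (a sketch; the res/raw resource name is just a placeholder):

// Sketch: build the credential from a service-account JSON key shipped in res/raw
// (R.raw.credential is a placeholder; fromStream throws IOException)
InputStream keyStream = context.getResources().openRawResource(R.raw.credential);
GoogleCredentials credential = GoogleCredentials.fromStream(keyStream);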
There is no error and no response. I have also tried the asynchronous call, but it still does not work:
OperationFuture<LongRunningRecognizeResponse, LongRunningRecognizeMetadata> response =
    mApi.longRunningRecognizeAsync(config, recognitionAudio);
while (!response.isDone()) {
    System.out.println("Waiting for response...");
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
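Once the operation finishes, the result would be read from the future like this (a sketch of the follow-up code):

// Sketch: read the transcript once the long-running operation completes
try {
    LongRunningRecognizeResponse lroResponse = response.get();
    for (SpeechRecognitionResult result : lroResponse.getResultsList()) {
        System.out.println(result.getAlternatives(0).getTranscript());
    }
} catch (InterruptedException | ExecutionException e) {
    e.printStackTrace();
}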