Code A is from the article https://cloud.google.com/speech-to-text/docs/async-recognize
It is written in Java. I don't think the code below is very good; it makes the application block:
while (!response.isDone()) {
  System.out.println("Waiting for response...");
  Thread.sleep(10000);
}
...
I am a beginner in Kotlin. How can I write better code in Kotlin? Perhaps using coroutines?
Code A
public static void asyncRecognizeGcs(String gcsUri) throws Exception {
  // Instantiates a client with GOOGLE_APPLICATION_CREDENTIALS
  try (SpeechClient speech = SpeechClient.create()) {
    // Configure remote file request for FLAC
    RecognitionConfig config =
        RecognitionConfig.newBuilder()
            .setEncoding(AudioEncoding.FLAC)
            .setLanguageCode("en-US")
            .setSampleRateHertz(16000)
            .build();
    RecognitionAudio audio = RecognitionAudio.newBuilder().setUri(gcsUri).build();

    // Use non-blocking call for getting file transcription
    OperationFuture<LongRunningRecognizeResponse, LongRunningRecognizeMetadata> response =
        speech.longRunningRecognizeAsync(config, audio);

    while (!response.isDone()) {
      System.out.println("Waiting for response...");
      Thread.sleep(10000);
    }

    List<SpeechRecognitionResult> results = response.get().getResultsList();

    for (SpeechRecognitionResult result : results) {
      // There can be several alternative transcripts for a given chunk of speech. Just use the
      // first (most likely) one here.
      SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
      System.out.printf("Transcription: %s\n", alternative.getTranscript());
    }
  }
}
Answer 0 (score: 0)
You would have to provide some context about what you are trying to achieve, but it looks like coroutines are not really necessary here, because longRunningRecognizeAsync is already non-blocking and returns an OperationFuture response object. You just need to decide what to do with that response, or simply store the Future and check it later. There is nothing inherently wrong with while (!response.isDone()) {}; that is how Java Futures are meant to be used. Also check OperationFuture: if it is a regular Java Future, it should implement a get() method, which will let you wait for the result when necessary without an explicit Thread.sleep() loop.
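On the coroutine question: if you do want to call this from Kotlin without blocking the calling thread, one common pattern is to wrap the blocking get() in a coroutine on a background dispatcher. The sketch below is a minimal illustration, assuming the kotlinx-coroutines-core dependency is on the classpath; it uses a plain CompletableFuture as a stand-in for the OperationFuture returned by longRunningRecognizeAsync, since the real call needs Google Cloud credentials.

```kotlin
import java.util.concurrent.CompletableFuture
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.withContext

fun main() = runBlocking {
    // Stand-in for speech.longRunningRecognizeAsync(config, audio):
    // a future that completes with a fake transcript after a short delay.
    val response: CompletableFuture<String> = CompletableFuture.supplyAsync {
        Thread.sleep(100)
        "Transcription: hello world"
    }

    // Instead of polling isDone() with Thread.sleep(), suspend until the
    // future completes. get() blocks only a worker thread of the IO
    // dispatcher; the calling coroutine is suspended, not blocked.
    val result = withContext(Dispatchers.IO) { response.get() }
    println(result)
}
```

If you add the kotlinx-coroutines-jdk8 module, you can go further and call response.await() directly on a CompletableFuture, which suspends without tying up a worker thread at all.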