Is it possible to start streaming from CompletableFutures inside a Spark Streaming job? For example, we have three CompletableFutures, as below:
final CompletableFuture<String> CF_A =
    CompletableFuture.supplyAsync(() -> { method_A(); return "A"; });
final CompletableFuture<String> CF_B =
    CompletableFuture.supplyAsync(() -> { method_B(); return "B"; });
final CompletableFuture<String> CF_C =
    CompletableFuture.supplyAsync(() -> { method_C(); return "C"; });

CompletableFuture
    .supplyAsync(() -> Stream.of(CF_A, CF_B, CF_C)
        .parallel()
        .map(CompletableFuture::join)
        .collect(Collectors.toList()), executor);
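Outside of Spark, the fan-out/join pattern above can be sketched as a self-contained example. The methods below are hypothetical stand-ins for method_A(), method_B(), and method_C() that just return a marker string; CompletableFuture.allOf is used instead of a parallel stream so no stream worker thread blocks inside join():

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ParallelFutures {
    // Hypothetical stand-ins: the real methods would filter data in a
    // folder and save it, instead of returning a constant.
    static String methodA() { return "A"; }
    static String methodB() { return "B"; }
    static String methodC() { return "C"; }

    public static List<String> runAll(ExecutorService executor) {
        CompletableFuture<String> cfA = CompletableFuture.supplyAsync(ParallelFutures::methodA, executor);
        CompletableFuture<String> cfB = CompletableFuture.supplyAsync(ParallelFutures::methodB, executor);
        CompletableFuture<String> cfC = CompletableFuture.supplyAsync(ParallelFutures::methodC, executor);

        // Wait until all three complete, then collect the results in order.
        return CompletableFuture.allOf(cfA, cfB, cfC)
                .thenApply(done -> Stream.of(cfA, cfB, cfC)
                        .map(CompletableFuture::join)
                        .collect(Collectors.toList()))
                .join();
    }

    public static void main(String[] args) {
        ExecutorService executor = Executors.newFixedThreadPool(3);
        try {
            System.out.println(runAll(executor)); // prints [A, B, C]
        } finally {
            executor.shutdown();
        }
    }
}
```

The three tasks run concurrently on the pool; allOf(...).join() blocks only the caller, once, rather than once per future inside the parallel stream.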
Assume that method_A(), method_B(), and method_C() each filter some data in a folder and then save the result every 10 seconds. Can this streaming be driven as a CompletableFuture parallel process?
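For the every-10-seconds part, one plain-JDK sketch (not Spark Streaming itself) is to drive the batch from a ScheduledExecutorService and fan each run out as CompletableFutures. The class name, scheduleBatches helper, and the printed markers are all assumptions standing in for the real filter-and-save methods:

```java
import java.util.Arrays;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class PeriodicFutures {

    // Every `period` units: fan the tasks out as CompletableFutures and
    // wait for the whole batch before the scheduled run returns, so
    // consecutive 10-second batches never overlap.
    public static ScheduledFuture<?> scheduleBatches(ScheduledExecutorService scheduler,
            long period, TimeUnit unit, Runnable... tasks) {
        return scheduler.scheduleAtFixedRate(() -> {
            CompletableFuture<?>[] batch = Arrays.stream(tasks)
                    .map(CompletableFuture::runAsync)
                    .toArray(CompletableFuture[]::new);
            CompletableFuture.allOf(batch).join();
        }, 0, period, unit);
    }

    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        // Hypothetical stand-ins for method_A/B/C's filter-and-save work.
        ScheduledFuture<?> handle = scheduleBatches(scheduler, 10, TimeUnit.SECONDS,
                () -> System.out.println("A saved"),
                () -> System.out.println("B saved"),
                () -> System.out.println("C saved"));
        Thread.sleep(100);      // let the first batch run, then stop the demo
        handle.cancel(false);
        scheduler.shutdown();
    }
}
```

Note this only reproduces the "run three tasks in parallel every 10 seconds" behavior; it does not give you Spark Streaming features such as receivers, checkpointing, or backpressure.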