gRPC StatusRuntimeException on Dataflow

Asked: 2019-07-18 13:27:23

Tags: google-cloud-platform apache-beam google-cloud-pubsub dataflow beam

I have a Dataflow pipeline that consumes Pub/Sub messages, processes them, and publishes the results back to Pub/Sub.
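For reference, the pipeline has roughly the following shape (a minimal sketch using the standard Beam PubsubIO connectors; the subscription and topic names are hypothetical, and the per-message processing step is where the extra computation mentioned below would go):

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;

public class PubsubPipeline {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    p.apply("ReadFromPubsub",
            PubsubIO.readStrings().fromSubscription(
                "projects/my-project/subscriptions/input-sub")) // hypothetical subscription
     .apply("Process", ParDo.of(new DoFn<String, String>() {
       @ProcessElement
       public void processElement(@Element String msg, OutputReceiver<String> out) {
         // Per-message processing happens here; increasing the amount of work
         // done in this step is what correlates with the errors described below.
         out.output(msg);
       }
     }))
     .apply("WriteToPubsub",
            PubsubIO.writeStrings().to(
                "projects/my-project/topics/output-topic")); // hypothetical topic

    p.run();
  }
}
```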

Whenever I do too much computation (i.e. increase the amount of processing per message), I get the following exception: java.util.concurrent.ExecutionException: org.apache.beam.vendor.grpc.v1p13p1.io.grpc.StatusRuntimeException: CANCELLED: cancelled before receiving half close

What causes this error, and how can I avoid it? Full stack trace:

org.apache.beam.vendor.grpc.v1p13p1.io.grpc.StatusRuntimeException: CANCELLED: call already cancelled
        org.apache.beam.vendor.grpc.v1p13p1.io.grpc.Status.asRuntimeException(Status.java:517)
        org.apache.beam.vendor.grpc.v1p13p1.io.grpc.stub.ServerCalls$ServerCallStreamObserverImpl.onNext(ServerCalls.java:335)
        org.apache.beam.sdk.fn.stream.DirectStreamObserver.onNext(DirectStreamObserver.java:98)
        org.apache.beam.sdk.fn.stream.SynchronizedStreamObserver.onNext(SynchronizedStreamObserver.java:46)
        org.apache.beam.runners.fnexecution.control.FnApiControlClient.handle(FnApiControlClient.java:84)
        org.apache.beam.runners.dataflow.worker.fn.control.RegisterAndProcessBundleOperation.start(RegisterAndProcessBundleOperation.java:254)
        org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:77)
        org.apache.beam.runners.dataflow.worker.fn.control.BeamFnMapTaskExecutor.execute(BeamFnMapTaskExecutor.java:125)
        org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1283)
        org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.access$1000(StreamingDataflowWorker.java:147)
        org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker$6.run(StreamingDataflowWorker.java:1020)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        java.lang.Thread.run(Thread.java:745)

0 Answers:

No answers yet