Dataflow pipeline stuck reading from Pub/Sub

Date: 2019-04-22 08:16:12

Tags: python google-cloud-dataflow google-cloud-pubsub google-cloud-console

After a day of working normally, streaming data from Pub/Sub, flattening it and writing rows to BigQuery, the Dataflow pipeline has started reporting the following error:


Processing stuck in step s01 for at least 05m00s without outputting or completing in state process
  at sun.misc.Unsafe.park(Native Method)
  at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
  at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
  at org.apache.beam.runners.dataflow.worker.fn.data.RemoteGrpcPortWriteOperation.maybeWait(RemoteGrpcPortWriteOperation.java:170)
  at org.apache.beam.runners.dataflow.worker.fn.data.RemoteGrpcPortWriteOperation.process(RemoteGrpcPortWriteOperation.java:191)
  at org.apache.beam.runners.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:49)
  at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:201)
  at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:159)
  at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:77)
  at org.apache.beam.runners.dataflow.worker.fn.control.BeamFnMapTaskExecutor.execute(BeamFnMapTaskExecutor.java:125)
  at org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1269)
  at org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.access$1000(StreamingDataflowWorker.java:146)
  at org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker$6.run(StreamingDataflowWorker.java:1008)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
  at java.lang.Thread.run(Thread.java:745)

These errors keep recurring for longer and longer durations, reaching 25m00s with the same stack trace.

I have had no luck with Stackdriver, since these errors do not show up there.

This is my pipeline:

from __future__ import absolute_import

import logging
import argparse
import apache_beam as beam
import apache_beam.transforms.window as window


class parse_pubsub(beam.DoFn):
    def process(self, element):
        # Flatten data ...
        # (the flattening logic is elided here; it builds the final_rows
        # iterable that is emitted row by row below)
        for row in final_rows:
            yield row


def run(argv=None):
    """Build and run the pipeline."""
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--input_topic', required=True,
        help='Input Pub/Sub topic of the form "projects/<PROJECT>/topics/<TOPIC>".')
    parser.add_argument(
        '--output_table', required=True,
        help=('Output BigQuery table for results specified as: PROJECT:DATASET.TABLE '
              'or DATASET.TABLE.'))
    known_args, pipeline_args = parser.parse_known_args(argv)

    # The schema was elided in the question; table_schema has to be defined
    # here, since WriteToBigQuery references it below.
    table_schema = '-------'

    with beam.Pipeline(argv=pipeline_args) as p:
        lines = ( p | 'Read from PubSub' >> beam.io.ReadFromPubSub(known_args.input_topic)
                    | 'Parse data' >> beam.ParDo(parse_pubsub())
                    | 'Write to BigQuery' >> beam.io.WriteToBigQuery(
                        known_args.output_table,
                        schema=table_schema,
                        create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED
                    )
                )


if __name__ == '__main__':
    logging.getLogger().setLevel(logging.INFO)
    run()

Could this be a worker issue? Should I start the job with more workers? Is there anything in the code that could prevent this?
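
For reference, a launch command with explicit worker counts would look roughly like this; the file name and all bracketed values are placeholders, not my exact invocation:

python pipeline.py \
    --runner DataflowRunner \
    --project <PROJECT> \
    --temp_location gs://<BUCKET>/tmp \
    --streaming \
    --num_workers 3 \
    --max_num_workers 10 \
    --input_topic projects/<PROJECT>/topics/<TOPIC> \
    --output_table <PROJECT>:<DATASET>.<TABLE>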

1 Answer:

Answer 0: (score: 2)

Unfortunately, Python streaming Dataflow jobs are still in beta. One of the limitations of the beta is that several of the IO connectors run on the Dataflow backend, and users cannot access their logs.

There is at least one issue I have seen with a similar stack trace, BEAM-5791, which was fixed in 2.9.0. Try upgrading to the latest Beam release if you have not already.
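
Upgrading is just a pip install; 2.9.0 below is simply the first release with the fix mentioned above, so pick whatever is current:

pip install --upgrade "apache-beam[gcp]"

# or pin an explicit release in your requirements file:
# apache-beam[gcp]>=2.9.0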

Another common cause is a permissions problem. Make sure the Dataflow service account still has access to your Pub/Sub topic.
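
You can inspect the binding and, if needed, restore it with gcloud; the service-account address below is a placeholder for whichever account your job runs as. When a pipeline reads directly from a topic, Dataflow creates its own subscription on it, which typically requires roles/pubsub.editor:

gcloud pubsub topics get-iam-policy <TOPIC> --project <PROJECT>

gcloud pubsub topics add-iam-policy-binding <TOPIC> \
    --project <PROJECT> \
    --member serviceAccount:<SERVICE_ACCOUNT_EMAIL> \
    --role roles/pubsub.editor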

If you still run into the problem after that, you will need to file a ticket with Google Cloud Support. They can look at the backend logs for your job and help you find the cause of the issue.