Dataflow stops streaming to BigQuery with no errors

Asked: 2018-12-04 10:30:45

Tags: google-cloud-platform google-bigquery google-cloud-dataflow apache-beam google-cloud-pubsub

We started using Dataflow to read from Pub/Sub and stream the data into BigQuery. The pipeline should run 24/7, because Pub/Sub is constantly updated with analytics data from multiple websites around the world.

Here is the code:

from __future__ import absolute_import

import argparse
import json
import logging

import apache_beam as beam
from apache_beam.io import ReadFromPubSub, WriteToBigQuery
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions

logger = logging.getLogger()

TABLE_IDS = {
    'table_1': 0,
    'table_2': 1,
    'table_3': 2,
    'table_4': 3,
    'table_5': 4,
    'table_6': 5,
    'table_7': 6,
    'table_8': 7,
    'table_9': 8,
    'table_10': 9,
    'table_11': 10,
    'table_12': 11,
    'table_13': 12
}


def separate_by_table(element, num):
    # Partition function: route each parsed element to the index of its target table.
    return TABLE_IDS[element.get('meta_type')]


class ExtractingDoFn(beam.DoFn):
    def process(self, element):
        # Decode the raw Pub/Sub message payload as JSON.
        yield json.loads(element)


def run(argv=None):
    """Main entry point; defines and runs the wordcount pipeline."""
    logger.info('STARTED!')
    parser = argparse.ArgumentParser()
    parser.add_argument('--topic',
                        dest='topic',
                        default='projects/PROJECT_NAME/topics/TOPICNAME',
                        help='Cloud Pub/Sub topic in the form "projects/<project>/topics/<topic>"')
    parser.add_argument('--table',
                        dest='table',
                        default='PROJECTNAME:DATASET_NAME.event_%s',
                        help='BigQuery table name pattern in the form "PROJECT:DATASET.TABLE"')
    known_args, pipeline_args = parser.parse_known_args(argv)

    # We use the save_main_session option because one or more DoFn's in this
    # workflow rely on global context (e.g., a module imported at module level).
    pipeline_options = PipelineOptions(pipeline_args)
    pipeline_options.view_as(SetupOptions).save_main_session = True
    p = beam.Pipeline(options=pipeline_options)

    # Read raw messages from Pub/Sub, parse them as JSON and split them
    # into one PCollection per destination table.
    lines = p | ReadFromPubSub(known_args.topic)
    datas = lines | beam.ParDo(ExtractingDoFn())
    by_table = datas | beam.Partition(separate_by_table, 13)

    # Create a stream for each table
    for table, table_id in TABLE_IDS.items():
        by_table[table_id] | 'write to %s' % table >> WriteToBigQuery(known_args.table % table)

    result = p.run()
    result.wait_until_finish()


if __name__ == '__main__':
    logger.setLevel(logging.INFO)
    run()

It works fine, but after a while (2-3 days) it stops streaming for some reason. When I check the job status, there are no errors in its logs section (you know, the ones marked with a red "!" in the Dataflow job details). If I cancel the job and run it again, it starts working again as usual. If I check Stackdriver for additional logs, these are all the errors that have occurred: Errors list. And here are some warnings that occur periodically while the job runs: Warnings list. Details of one of them:

 {
 insertId: "397122810208336921:865794:0:479132535"  

jsonPayload: {
  exception: "java.lang.IllegalStateException: Cannot be called on unstarted operation.
    at com.google.cloud.dataflow.worker.fn.data.RemoteGrpcPortWriteOperation.getElementsSent(RemoteGrpcPortWriteOperation.java:111)
    at com.google.cloud.dataflow.worker.fn.control.BeamFnMapTaskExecutor$SingularProcessBundleProgressTracker.updateProgress(BeamFnMapTaskExecutor.java:293)
    at com.google.cloud.dataflow.worker.fn.control.BeamFnMapTaskExecutor$SingularProcessBundleProgressTracker.periodicProgressUpdate(BeamFnMapTaskExecutor.java:280)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
"   
  job: "2018-11-30_10_35_19-13557985235326353911"   
  logger: "com.google.cloud.dataflow.worker.fn.control.BeamFnMapTaskExecutor"   
  message: "Progress updating failed 4 times. Following exception safely handled."   
  stage: "S0"   
  thread: "62"   
  work: "c-8756541438010208464"   
  worker: "beamapp-vitar-1130183512--11301035-mdna-harness-lft7"   
 }

labels: {
  compute.googleapis.com/resource_id: "397122810208336921"   
  compute.googleapis.com/resource_name: "beamapp-vitar-1130183512--11301035-mdna-harness-lft7"   
  compute.googleapis.com/resource_type: "instance"   
  dataflow.googleapis.com/job_id: "2018-11-30_10_35_19-13557985235326353911"   
  dataflow.googleapis.com/job_name: "beamapp-vitar-1130183512-742054"   
  dataflow.googleapis.com/region: "europe-west1"   
 }
 logName: "projects/PROJECTNAME/logs/dataflow.googleapis.com%2Fharness"  
 receiveTimestamp: "2018-12-03T20:33:00.444208704Z"  

resource: {

labels: {
   job_id: "2018-11-30_10_35_19-13557985235326353911"    
   job_name: "beamapp-vitar-1130183512-742054"    
   project_id: PROJECTNAME
   region: "europe-west1"    
   step_id: ""    
  }
  type: "dataflow_step"   
 }
 severity: "WARNING"  
 timestamp: "2018-12-03T20:32:59.442Z"  
}

This seems to be when the problems started: Problem arised. Other info messages that may help: Info messages

According to these messages, we are not running out of memory, processing power, etc. The job is launched with the following parameters:

python -m start --streaming True --runner DataflowRunner --project PROJECTNAME --temp_location gs://BUCKETNAME/tmp/ --region europe-west1 --disk_size_gb 30 --machine_type n1-standard-1 --use_public_ips false --num_workers 1 --max_num_workers 1 --autoscaling_algorithm NONE

What could be the problem here?

3 Answers:

Answer 0 (score: 1)

This is not really an answer, but it may help narrow down the cause: so far, every streaming Dataflow job I have launched with the Python SDK has stopped after a few days, whether or not it used BigQuery as a sink. So the cause seems to be the general fact that streaming jobs with the python SDK are still in beta.

My personal workaround: use the Dataflow templates to stream from Pub/Sub to BigQuery (thus avoiding the Python SDK), then schedule queries in BigQuery to process the data periodically. Unfortunately, this may not fit your use case.
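For reference, a minimal sketch of launching the Google-provided PubSub_to_BigQuery streaming template with gcloud; JOB_NAME, PROJECT, TOPICNAME and DATASET_NAME are placeholders, and note that the template writes every message to a single table, so it does not replicate the 13-way fan-out from the question:

# Run the Google-provided "Pub/Sub Topic to BigQuery" streaming template.
# JOB_NAME, PROJECT, TOPICNAME and DATASET_NAME are placeholders.
gcloud dataflow jobs run JOB_NAME \
    --gcs-location gs://dataflow-templates/latest/PubSub_to_BigQuery \
    --region europe-west1 \
    --parameters inputTopic=projects/PROJECT/topics/TOPICNAME,outputTableSpec=PROJECT:DATASET_NAME.events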

Answer 1 (score: 0)

At my company we ran into the same problem the OP describes, with a similar use case.

Unfortunately the problem is real and concrete, and it apparently occurs at random.

As a workaround, we are considering rewriting the pipeline with the Java SDK.

Answer 2 (score: 0)

I had a problem similar to this one and found that the warning logs contained Python stack traces hidden inside the Java logs, which pointed at the actual error.

These errors were continuously retried by the workers, causing them to crash and completely freezing the pipeline. At first I thought the number of workers was too low, so I increased it, but the pipeline just took longer to freeze.

I ran the pipeline locally, exported the Pub/Sub messages as text, and found that they contained dirty data (messages that did not match the BQ table schema); since I had no exception handling, this appeared to be what caused the pipeline to freeze.

Adding a function that accepts only records whose first key matches the expected columns of the BQ schema fixed my problem, and the Dataflow job has been running ever since without any issues (see the wiring sketch after the snippet):

def bad_records(row):
    # Pass through only rows that match the BQ schema
    # ('key1' is a required column); log and drop everything else.
    if 'key1' in row:
        yield row
    else:
        print('bad row', row)


| 'exclude bad records' >> beam.ParDo(bad_records)
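For context, here is roughly how that filter could be wired into the question's pipeline, between the JSON-parsing step and the partitioning step. This is only a sketch reusing the variable and function names from the question (p, known_args, ExtractingDoFn, separate_by_table); in practice the rejected rows would better go to a dead-letter sink than to print:

# Sketch: drop rows that do not match the BQ schema before partitioning.
# All names come from the pipeline code in the question.
lines = p | ReadFromPubSub(known_args.topic)
datas = lines | beam.ParDo(ExtractingDoFn())
valid = datas | 'exclude bad records' >> beam.ParDo(bad_records)
by_table = valid | beam.Partition(separate_by_table, 13)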