I have a seemingly simple scenario: I use Python Dataflow to query data from BigQuery.
When the BigQuery query returns zero rows, I hit an AssertionError; the script and the assertion error are shown below. I would like to know whether this is a bug, or whether there is a recommended way to handle zero rows from the BigQuery reader in Python Dataflow.
Dataflow script:
import apache_beam as beam
from apache_beam.io import WriteToText
# In older SDK releases PipelineOptions lived in apache_beam.utils.pipeline_options.
from apache_beam.options.pipeline_options import PipelineOptions, SetupOptions
from apache_beam.typehints import Any, Dict

pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
p = beam.Pipeline(options=pipeline_options)

# BigQuery rows arrive in the pipeline as dictionaries.
BIGQUERY_ROW_TYPE = Dict[str, Any]

# Construct the BigQuery SQL (Query is a helper class of my own).
query_sql = Query().build_sql()

lines = p \
    | 'read from bigquery' >> beam.io.Read(
        beam.io.BigQuerySource(query=query_sql, validate=True)
    ).with_output_types(BIGQUERY_ROW_TYPE) \
    | 'write to test' >> WriteToText(known_args.output)

result = p.run()
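For completeness, pipeline_args and known_args come from the usual argparse boilerplate used in the Beam examples; a minimal sketch of that setup (the --output flag name matches known_args.output above, but is otherwise my assumption):

import argparse

parser = argparse.ArgumentParser()
# Destination for WriteToText; all other flags are forwarded to the pipeline.
parser.add_argument('--output', required=True,
                    help='Output path for the query results.')
known_args, pipeline_args = parser.parse_known_args()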
Error seen when the query returns zero rows:
(98b5a6e4c0cd002e): Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 581, in do_work
work_executor.execute()
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/executor.py", line 166, in execute
op.start()
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/native_operations.py", line 48, in start
for value in reader:
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/nativefileio.py", line 186, in __iter__
for eof, record, delta_offset in self.read_records():
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/nativeavroio.py", line 102, in read_records
assert block.num_records() > 0
AssertionError
2017-06-27 (13:55:58) Workflow failed. Causes: (7390b72dc5ceedb6): S04:read from bigquery+write to test/Write/WriteImpl/Wr...
(bb74ab934e658b06): Workflow failed. Causes: (7390b72dc5ceedb6): S04:read
from bigquery+write to test/Write/WriteImpl/WriteBundles/Do+write to
test/Write/WriteImpl/Pair+write to
test/Write/WriteImpl/WindowInto(WindowIntoFn)+write to
test/Write/WriteImpl/GroupByKey/Reify+write to
test/Write/WriteImpl/GroupByKey/Write failed.
Answer 0 (score: 1)
This is a bug, and it has already been fixed (by @jkff). The fix will be available in the next Dataflow release, in roughly 3-5 weeks.
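Until that release is out, a possible interim workaround (not part of the original answer, just a sketch that assumes the google-cloud-bigquery client library and a standard-SQL query) is to check whether the query yields any rows before launching the pipeline:

from google.cloud import bigquery

client = bigquery.Client()
# Wrap the pipeline's query in a COUNT(*) to see whether it returns anything.
count_sql = 'SELECT COUNT(*) AS n FROM ({}) AS q'.format(query_sql)
row_count = list(client.query(count_sql).result())[0][0]

if row_count == 0:
    print('Query returned zero rows; skipping the Dataflow job.')
else:
    result = p.run()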