I have a basic Beam pipeline that reads from GCS, does a Beam SQL transform and writes the results to BigQuery.
It works fine when I don't do any aggregation in the SQL statement:
..
PCollection<Row> outputStream =
        sqlRows.apply(
                "sql_transform",
                SqlTransform.query("select views from PCOLLECTION"));
outputStream.setCoder(SCHEMA.getRowCoder());
..
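For context, here is a minimal sketch of how sqlRows and SCHEMA might be set up (the GCS path, CSV layout and parsing below are placeholders rather than my actual pipeline code, and exact Schema builder signatures have varied slightly across Beam versions):
..
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.schemas.Schema;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.Row;
import org.apache.beam.sdk.values.TypeDescriptor;

// Sketch only: a two-field schema matching the query, with rows parsed
// from CSV lines read off GCS. Bucket path and parsing are placeholders.
final Schema SCHEMA = Schema.builder()
        .addStringField("wikimedia_project")
        .addInt64Field("views")
        .build();

PCollection<Row> sqlRows = pipeline
        .apply("read_from_gcs", TextIO.read().from("gs://<bucket>/input/*.csv"))
        .apply("to_rows", MapElements.into(TypeDescriptor.of(Row.class))
                .via((String line) -> {
                    String[] fields = line.split(",");
                    return Row.withSchema(SCHEMA)
                            .addValues(fields[0], Long.parseLong(fields[1]))
                            .build();
                }))
        .setCoder(SCHEMA.getRowCoder());
..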
However, when I try an aggregation using sum, it fails (throwing a CannotPlanException):
..
PCollection<Row> outputStream =
        sqlRows.apply(
                "sql_transform",
                SqlTransform.query("select wikimedia_project, sum(views) from PCOLLECTION group by wikimedia_project"));
outputStream.setCoder(SCHEMA.getRowCoder());
..
Stacktrace:
Step #1: 11:47:37,562 0 [main] INFO org.apache.beam.runners.dataflow.DataflowRunner - PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 117 files. Enable logging at DEBUG level to see which files will be staged.
Step #1: 11:47:39,845 2283 [main] INFO org.apache.beam.sdk.extensions.sql.impl.BeamQueryPlanner - SQL:
Step #1: SELECT `PCOLLECTION`.`wikimedia_project`, SUM(`PCOLLECTION`.`views`)
Step #1: FROM `beam`.`PCOLLECTION` AS `PCOLLECTION`
Step #1: GROUP BY `PCOLLECTION`.`wikimedia_project`
Step #1: 11:47:40,387 2825 [main] INFO org.apache.beam.sdk.extensions.sql.impl.BeamQueryPlanner - SQLPlan>
Step #1: LogicalAggregate(group=[{0}], EXPR$1=[SUM($1)])
Step #1: BeamIOSourceRel(table=[[beam, PCOLLECTION]])
Step #1:
Step #1: Exception in thread "main" org.apache.beam.repackaged.beam_sdks_java_extensions_sql.org.apache.calcite.plan.RelOptPlanner$CannotPlanException: Node [rel#7:Subset#1.BEAM_LOGICAL.[]] could not be implemented; planner state:
Step #1:
Step #1: Root: rel#7:Subset#1.BEAM_LOGICAL.[]
Step #1: Original rel:
Step #1: LogicalAggregate(subset=[rel#7:Subset#1.BEAM_LOGICAL.[]], group=[{0}], EXPR$1=[SUM($1)]): rowcount = 10.0, cumulative cost = {11.375000476837158 rows, 0.0 cpu, 0.0 io}, id = 5
Step #1: BeamIOSourceRel(subset=[rel#4:Subset#0.BEAM_LOGICAL.[]], table=[[beam, PCOLLECTION]]): rowcount = 100.0, cumulative cost = {100.0 rows, 101.0 cpu, 0.0 io}, id = 2
Step #1:
Step #1: Sets:
Step #1: Set#0, type: RecordType(VARCHAR wikimedia_project, BIGINT views)
Step #1: rel#4:Subset#0.BEAM_LOGICAL.[], best=rel#2, importance=0.81
Step #1: rel#2:BeamIOSourceRel.BEAM_LOGICAL.[](table=[beam, PCOLLECTION]), rowcount=100.0, cumulative cost={100.0 rows, 101.0 cpu, 0.0 io}
Step #1: rel#10:Subset#0.ENUMERABLE.[], best=rel#9, importance=0.405
Step #1: rel#9:BeamEnumerableConverter.ENUMERABLE.[](input=rel#4:Subset#0.BEAM_LOGICAL.[]), rowcount=100.0, cumulative cost={1.7976931348623157E308 rows, 1.7976931348623157E308 cpu, 1.7976931348623157E308 io}
Step #1: Set#1, type: RecordType(VARCHAR wikimedia_project, BIGINT EXPR$1)
Step #1: rel#6:Subset#1.NONE.[], best=null, importance=0.9
Step #1: rel#5:LogicalAggregate.NONE.[](input=rel#4:Subset#0.BEAM_LOGICAL.[],group={0},EXPR$1=SUM($1)), rowcount=10.0, cumulative cost={inf}
Step #1: rel#7:Subset#1.BEAM_LOGICAL.[], best=null, importance=1.0
Step #1: rel#8:AbstractConverter.BEAM_LOGICAL.[](input=rel#6:Subset#1.NONE.[],convention=BEAM_LOGICAL,sort=[]), rowcount=10.0, cumulative cost={inf}
Step #1:
Step #1:
Step #1: at org.apache.beam.repackaged.beam_sdks_java_extensions_sql.org.apache.calcite.plan.volcano.RelSubset$CheapestPlanReplacer.visit(RelSubset.java:448)
Step #1: at org.apache.beam.repackaged.beam_sdks_java_extensions_sql.org.apache.calcite.plan.volcano.RelSubset.buildCheapestPlan(RelSubset.java:298)
Step #1: at org.apache.beam.repackaged.beam_sdks_java_extensions_sql.org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:666)
Step #1: at org.apache.beam.repackaged.beam_sdks_java_extensions_sql.org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:368)
Step #1: at org.apache.beam.repackaged.beam_sdks_java_extensions_sql.org.apache.calcite.prepare.PlannerImpl.transform(PlannerImpl.java:336)
Step #1: at org.apache.beam.sdk.extensions.sql.impl.BeamQueryPlanner.convertToBeamRel(BeamQueryPlanner.java:138)
Step #1: at org.apache.beam.sdk.extensions.sql.impl.BeamSqlEnv.parseQuery(BeamSqlEnv.java:105)
Step #1: at org.apache.beam.sdk.extensions.sql.SqlTransform.expand(SqlTransform.java:96)
Step #1: at org.apache.beam.sdk.extensions.sql.SqlTransform.expand(SqlTransform.java:79)
Step #1: at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:537)
Step #1: at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:488)
Step #1: at org.apache.beam.sdk.values.PCollection.apply(PCollection.java:338)
Step #1: at org.polleyg.TemplatePipeline.main(TemplatePipeline.java:59)
Step #1: :run FAILED
Step #1:
Step #1: FAILURE: Build failed with an exception.
I'm using Beam 2.6.0.
Am I missing something obvious?
Answer 0 (score: 0):
This should work; it's a bug, so I filed BEAM-5384.

If you look at the plan, it has a LogicalAggregate operation, which represents the aggregation and needs to be implemented by Beam. Due to how Beam works, implementing the aggregation also requires extracting some information from the LogicalProject operation, which represents the field access in select f1, f2, and that is what is missing here. It is also not yet clear whether this is a bug where the query gets over-optimized and the projection is removed from the plan, or whether this is a valid use case that Beam should support.

One suggestion I have is to try modifying the select clause, e.g. rearranging the fields or adding more fields.
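For example (untested; this only illustrates the kind of rewording I mean, here by reordering the select list and aliasing the sum):
..
// Same aggregation with the select list reordered and the sum aliased.
// Whether this changes the plan enough to avoid the exception is untested.
PCollection<Row> outputStream =
        sqlRows.apply(
                "sql_transform",
                SqlTransform.query(
                        "select sum(views) as total_views, wikimedia_project "
                                + "from PCOLLECTION group by wikimedia_project"));
..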
Update:

There was at least one issue causing this. Basically, when your schema contains only the fields you use in the query, no projection is needed and Calcite doesn't add one to the plan. Beam's aggregation, however, needs a projection node to extract windowing information from (that's how it is currently implemented; doing it this way is probably not right).

Workaround: to get this specific query working, you can add extra fields to the schema without using them in the query. This causes Calcite to add a projection node to the plan, and the Beam SQL aggregation will then be applied.
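A minimal sketch of the workaround (the dummy field name is arbitrary and hypothetical; it just has to exist in the schema and stay out of the query):
..
// Workaround sketch: an extra schema field that the query never touches.
// Its presence should make Calcite keep a projection node in the plan,
// which Beam SQL's aggregation currently needs.
Schema SCHEMA = Schema.builder()
        .addStringField("wikimedia_project")
        .addInt64Field("views")
        .addStringField("dummy") // unused by the query below
        .build();

// Rows have to populate the extra field with some placeholder value.
Row row = Row.withSchema(SCHEMA)
        .addValues("en.wikipedia", 42L, "")
        .build();

// The aggregation query itself stays exactly as before.
PCollection<Row> outputStream =
        sqlRows.apply(
                "sql_transform",
                SqlTransform.query(
                        "select wikimedia_project, sum(views) "
                                + "from PCOLLECTION group by wikimedia_project"));
..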
This specific issue is now fixed at Beam HEAD: https://github.com/apache/beam/commit/8c35781d62846211e43b6b122b557f8c3fdaec6d#diff-4f4ffa265fe666e99c37c346d50da67dR637