I'm new to Spark SQL and am using explain to learn how it optimizes queries. I assumed that a table defined in a WITH clause and referenced multiple times would be computed only once.
However, in the optimized logical plan from the explain output below, the table location_with_count appears in two different subtrees.
Does this mean it will be computed twice, or is this just an artifact of how the plan is displayed?
In [24]: sql = """
...: WITH location_with_count AS (
...: SELECT uid, country_code, city_code, count() over (PARTITION BY country_code, city_code) as c
...: FROM location
...: ),
...:
...: rs AS (
...: SELECT uid, country_code, city_code,
...: row_number() over (PARTITION BY country_code, city_code
...: ORDER BY uid DESC) AS Rank
...: FROM location_with_count as uc
...: WHERE uc.c > 10
...: )
...:
...: (SELECT uid, country_code, city_code FROM rs WHERE Rank <= 10)
...: union
...: (SELECT uid, country_code, city_code FROM location_with_count WHERE c <= 10)
...: """
In [25]: session.sql(sql).explain(True)
== Parsed Logical Plan ==
CTE [location_with_count, rs]
: :- 'SubqueryAlias location_with_count
: : +- 'Project ['uid, 'country_code, 'city_code, 'count() windowspecdefinition('country_code, 'city_code, UnspecifiedFrame) AS c#281]
: : +- 'UnresolvedRelation `location`
: +- 'SubqueryAlias rs
: +- 'Project ['uid, 'country_code, 'city_code, 'row_number() windowspecdefinition('country_code, 'city_code, 'uid DESC NULLS LAST, UnspecifiedFrame) AS Rank#282]
: +- 'Filter ('uc.c > 10)
: +- 'SubqueryAlias uc
: +- 'UnresolvedRelation `location_with_count`
+- 'Distinct
+- 'Union
:- 'Project ['uid, 'country_code, 'city_code]
: +- 'Filter ('Rank <= 10)
: +- 'UnresolvedRelation `rs`
+- 'Project ['uid, 'country_code, 'city_code]
+- 'Filter ('c <= 10)
+- 'UnresolvedRelation `location_with_count`
== Analyzed Logical Plan ==
uid: bigint, country_code: string, city_code: string
Distinct
+- Union
:- Project [uid#283L, country_code#284, city_code#287]
: +- Filter (Rank#282 <= 10)
: +- SubqueryAlias rs
: +- Project [uid#283L, country_code#284, city_code#287, Rank#282]
: +- Project [uid#283L, country_code#284, city_code#287, Rank#282, Rank#282]
: +- Window [row_number() windowspecdefinition(country_code#284, city_code#287, uid#283L DESC NULLS LAST, ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS Rank#282], [country_code#284, city_code#287], [uid#283L DESC NULLS LAST]
: +- Project [uid#283L, country_code#284, city_code#287]
: +- Filter (c#281L > cast(10 as bigint))
: +- SubqueryAlias uc
: +- SubqueryAlias location_with_count
: +- Project [uid#283L, country_code#284, city_code#287, c#281L]
: +- Project [uid#283L, country_code#284, city_code#287, c#281L, c#281L]
: +- Window [count() windowspecdefinition(country_code#284, city_code#287, ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS c#281L], [country_code#284, city_code#287]
: +- Project [uid#283L, country_code#284, city_code#287]
: +- SubqueryAlias location
: +- Relation[uid#283L,country_code#284,city_code#287] parquet
+- Project [uid#283L, country_code#284, city_code#287]
+- Filter (c#281L <= cast(10 as bigint))
+- SubqueryAlias location_with_count
+- Project [uid#283L, country_code#284, city_code#287, c#281L]
+- Project [uid#283L, country_code#284, city_code#287, c#281L, c#281L]
+- Window [count() windowspecdefinition(country_code#284, city_code#287, ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS c#281L], [country_code#284, city_code#287]
+- Project [uid#283L, country_code#284, city_code#287]
+- SubqueryAlias location
+- Relation[uid#283L,country_code#284,city_code#287] parquet
== Optimized Logical Plan ==
Aggregate [uid#283L, country_code#284, city_code#287], [uid#283L, country_code#284, city_code#287]
+- Union
:- Project [uid#283L, country_code#284, city_code#287]
: +- Filter (isnotnull(Rank#282) && (Rank#282 <= 10))
: +- Window [row_number() windowspecdefinition(country_code#284, city_code#287, uid#283L DESC NULLS LAST, ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS Rank#282], [country_code#284, city_code#287], [uid#283L DESC NULLS LAST]
: +- Project [uid#283L, country_code#284, city_code#287]
: +- Filter (c#281L > 10)
: +- Window [0 AS c#281L], [country_code#284, city_code#287]
: +- Project [uid#283L, country_code#284, city_code#287]
: +- Relation[uid#283L,country_code#284,city_code#287] parquet
+- Project [uid#283L, country_code#284, city_code#287]
+- Filter (c#281L <= 10)
+- Window [0 AS c#281L], [country_code#284, city_code#287]
+- Project [uid#283L, country_code#284, city_code#287]
+- Relation[uid#283L,country_code#284,city_code#287] parquet
== Physical Plan ==
*HashAggregate(keys=[uid#283L, country_code#284, city_code#287], functions=[], output=[uid#283L, country_code#284, city_code#287])
+- Exchange hashpartitioning(uid#283L, country_code#284, city_code#287, 200)
+- *HashAggregate(keys=[uid#283L, country_code#284, city_code#287], functions=[], output=[uid#283L, country_code#284, city_code#287])
+- Union
:- *Project [uid#283L, country_code#284, city_code#287]
: +- *Filter (isnotnull(Rank#282) && (Rank#282 <= 10))
: +- Window [row_number() windowspecdefinition(country_code#284, city_code#287, uid#283L DESC NULLS LAST, ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS Rank#282], [country_code#284, city_code#287], [uid#283L DESC NULLS LAST]
: +- *Sort [country_code#284 ASC NULLS FIRST, city_code#287 ASC NULLS FIRST, uid#283L DESC NULLS LAST], false, 0
: +- *Project [uid#283L, country_code#284, city_code#287]
: +- *Filter (c#281L > 10)
: +- Window [0 AS c#281L], [country_code#284, city_code#287]
: +- *Sort [country_code#284 ASC NULLS FIRST, city_code#287 ASC NULLS FIRST], false, 0
: +- Exchange hashpartitioning(country_code#284, city_code#287, 200)
: +- *Project [uid#283L, country_code#284, city_code#287]
: +- *FileScan parquet default.location[uid#283L,country_code#284,city_code#287] Batched: true, Format: Parquet, Location: InMemoryFileIndex[.../location], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<uid:bigint,country_code:string,city_code:string>
+- *Project [uid#283L, country_code#284, city_code#287]
+- *Filter (c#281L <= 10)
+- Window [0 AS c#281L], [country_code#284, city_code#287]
+- *Sort [country_code#284 ASC NULLS FIRST, city_code#287 ASC NULLS FIRST], false, 0
+- ReusedExchange [uid#283L, country_code#284, city_code#287], Exchange hashpartitioning(country_code#284, city_code#287, 200)
In the physical plan, I see

ReusedExchange [uid#283L, country_code#284, city_code#287], Exchange hashpartitioning(country_code#284, city_code#287, 200)

Does this actually indicate that location_with_count is being reused?
Answer 0 (score: 1)
SubqueryAlias logical operators are eventually removed by the EliminateSubqueryAliases logical optimization. An alias is merely a pointer (reference) to the same part of the query and takes no part in execution.
You can find more details under EliminateSubqueryAliases Logical Optimization.
The ReuseSubquery physical query optimization should prevent a subquery from being executed more than once.
You can find more details under ReuseSubquery Physical Query Optimization.
Does this actually indicate that location_with_count is being reused?

I hope so.