Why doesn't Spark apply predicate pushdown between Spark and ADLS Gen 2 (blob storage) in my case? From articles like this one I know that ADLS Gen 2 supports predicate pushdown. But does Spark actually interoperate with the ADLS Gen 2 API correctly so that predicate pushdown happens?
```python
# Folder structure example:
# <base_path>/NAME=ABC/YEAR=2019/MONTH=05/part-xxxx-.snappy.parquet
spark_session.read \
    .parquet('<base_path>') \
    .filter((F.col('NAME') == 'ABC') & (F.col('YEAR') == 2019) & (F.col('MONTH') == 5)) \
    .explain()
```
As you can see from the output, PySpark uses a plain `RemoteServiceExec filter` rather than a `PushedFilter`. AFAIK, this means Spark is not applying predicate pushdown for the query.
P.S. I am running the job via databricks-connect in client deploy mode.
输出:
RemoteServiceExec filter {
condition {
sql_expr {
sql_repr: "((('#non-sql-expr-183d4706-d0e7-476a-823c-efe0de41914a' = 'ABC') AND ('#non-sql-expr-823ed3f2-b08f-4d00-88c1-29f45d08c43e' = 2019)) AND ('#non-sql-expr-a361bb1a-085f-4948-8fa5-7e1e14f835ee' = 5))"
embedded_non_sql_exprs {
uuid: "#non-sql-expr-823ed3f2-b08f-4d00-88c1-29f45d08c43e"
expr {
named_expr {
attr {
attr_ref {
name: "YEAR"
dataType {
sql_repr: "INT"
}
nullable: true
metadata {
json_repr: "{}"
}
expr_id {
id: 39
jvm_id: "929051d7-3030-4e2c-bf57-923aa610f40e"
}
}
}
}
}
}
embedded_non_sql_exprs {
uuid: "#non-sql-expr-183d4706-d0e7-476a-823c-efe0de41914a"
expr {
named_expr {
attr {
attr_ref {
name: "NAME"
dataType {
sql_repr: "STRING"
}
nullable: true
metadata {
json_repr: "{}"
}
expr_id {
id: 38
jvm_id: "929051d7-3030-4e2c-bf57-923aa610f40e"
}
}
}
}
}
}
embedded_non_sql_exprs {
uuid: "#non-sql-expr-a361bb1a-085f-4948-8fa5-7e1e14f835ee"
expr {
named_expr {
attr {
attr_ref {
name: "MONTH"
dataType {
sql_repr: "INT"
}
nullable: true
metadata {
json_repr: "{}"
}
expr_id {
id: 40
jvm_id: "929051d7-3030-4e2c-bf57-923aa610f40e"
}
}
}
}
}
}
}
}
child {
logical_relation {
relation {
data_frame_read_spec {
user_specified_source: "parquet"
paths: "abfss://<base_path>"
}
}
schema {
...
}
uuid: "da53053d-5e34-4c8d-8bab-f7a30dfefde4"
}
}
}
serialization_context {
}