How can I use a predicate when reading from a JDBC connection?

Asked: 2017-07-31 16:26:04

Tags: r apache-spark jdbc sparklyr

By default, spark_read_jdbc() reads an entire database table into Spark. I have been creating these connections with the following syntax:

library(sparklyr)
library(dplyr)

# Put the MySQL JDBC driver on the driver classpath
config <- spark_config()
config$`sparklyr.shell.driver-class-path` <- "mysql-connector-java-5.1.43/mysql-connector-java-5.1.43-bin.jar"

sc <- spark_connect(master         = "local",
                    version        = "1.6.0",
                    hadoop_version = "2.4",
                    config         = config)

# Reads the whole table into Spark
db_tbl <- sc %>%
  spark_read_jdbc(sc      = .,
                  name    = "table_name",
                  options = list(url      = "jdbc:mysql://localhost:3306/schema_name",
                                 user     = "root",
                                 password = "password",
                                 dbtable  = "table_name"))

However, I now have a situation where there is a table in a MySQL database of which I would rather read only a subset into Spark.

How do I get spark_read_jdbc to accept a predicate? I tried adding a predicate to the options list, without success:

db_tbl <- sc %>%
  spark_read_jdbc(sc      = .,
                  name    = "table_name",
                  options = list(url        = "jdbc:mysql://localhost:3306/schema_name",
                                 user       = "root",
                                 password   = "password",
                                 dbtable    = "table_name",
                                 predicates = "field > 1"))

1 Answer:

Answer 0 (score: 3)

Spark's JDBC options do not include a predicates key, but you can replace dbtable with a query:

db_tbl <- sc %>%
  spark_read_jdbc(sc      = .,
                  name    = "table_name",
                  options = list(url      = "jdbc:mysql://localhost:3306/schema_name",
                                 user     = "root",
                                 password = "password",
                                 dbtable  = "(SELECT * FROM table_name WHERE field > 1) as my_query"))

For a simple condition like this, however, Spark will push the predicate down automatically when you filter:

db_tbl %>% filter(field > 1)
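You can check that the filter is applied at the source rather than in R. A minimal sketch, assuming a numeric column named field as in the question; show_query() comes from dplyr, and spark_dataframe()/invoke() are sparklyr helpers:

# Show the SQL that sparklyr generates for the lazy tbl
db_tbl %>%
  filter(field > 1) %>%
  show_query()

# Inspect Spark's physical plan; a pushed-down predicate shows up
# as PushedFilters on the JDBC scan node (output goes to the
# Spark console/log rather than the R console)
db_tbl %>%
  filter(field > 1) %>%
  spark_dataframe() %>%
  invoke("explain")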

Just be sure to set memory = FALSE in spark_read_jdbc.
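A minimal sketch of the combined approach, reusing the placeholder names from the question (table_name, schema_name, field):

db_tbl <- spark_read_jdbc(sc,
                          name    = "table_name",
                          options = list(url      = "jdbc:mysql://localhost:3306/schema_name",
                                         user     = "root",
                                         password = "password",
                                         dbtable  = "table_name"),
                          # memory = FALSE registers the table lazily instead of
                          # caching it in full up front, so the filter below can
                          # be pushed down to MySQL
                          memory  = FALSE)

subset_tbl <- db_tbl %>% filter(field > 1)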