Partitioning incompletely specified error in my Spark application

Time: 2017-04-15 15:31:39

Tags: apache-spark partitioning

Please look at the code below. When I pass a value for the number of partitions, I get the following error.

      def loadDataFromPostgress(sqlContext: SQLContext, tableName: String,
          columnName: String, dbURL: String, userName: String, pwd: String,
          partitions: String): DataFrame = {
        println("the no of partitions are : " + partitions)
        var dataDF = sqlContext.read.format("jdbc").options(
          scala.collection.Map("url" -> dbURL,
            "dbtable" -> tableName,
            "driver" -> "org.postgresql.Driver",
            "user" -> userName,
            "password" -> pwd,
            "partitionColumn" -> columnName,
            "numPartitions" -> "1000")).load()
        return dataDF
      }

Error:

      java.lang.RuntimeException: Partitioning incompletely specified
      App > at scala.sys.package$.error(package.scala:27)
      App > at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:38)
      App > at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:315)
      App > at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:149)
      App > at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:122)
      App > at Test$.loadDataFromGreenPlum(script.scala:28)
      App > at Test$.loadDataFrame(script.scala:15)
      App > at Test$.main(script.scala:59)
      App > at Test.main(script.scala)
      App > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      App > at

2 answers:

Answer 0 (score: 3)

You can see how to specify the partitioning fully in the code below.

def loadDataFromPostgress(sqlContext: SQLContext, tableName: String,
                          columnName: String, dbURL: String, userName: String,
                          pwd: String, partitions: String): DataFrame = {
  println("the no of partitions are : " + partitions)
  var dataDF = sqlContext.read.format("jdbc").options(
    scala.collection.Map("url" -> dbURL,
      "dbtable" -> "(select mod(tmp.empid,10) as hash_code,tmp.* from employee as tmp) as t",
      "driver" -> "org.postgresql.Driver",
      "user" -> userName,
      "password" -> pwd,
      "partitionColumn" -> "hash_code",
      "lowerBound" -> "0",
      "upperBound" -> "10",
      "numPartitions" -> "10")).load()
  return dataDF
}

The code above will create 10 tasks, with 10 queries like the ones below, before the job runs.


offset = (upperBound - lowerBound) / numPartitions

Here offset = (10 - 0) / 10 = 1
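The stride arithmetic above can be sketched in plain Scala. This is an illustrative reimplementation of the idea, not Spark's actual source; the `hash_code` column name and the open-ended last partition mirror how Spark's JDBC source typically splits a numeric range:

```scala
// Sketch of the per-partition predicate generation described above.
// NOT Spark's real code; just the same arithmetic, made runnable.
object PartitionStride {
  def clauses(lowerBound: Long, upperBound: Long, numPartitions: Int): Seq[String] = {
    // offset = (upperBound - lowerBound) / numPartitions; here (10 - 0) / 10 = 1
    val offset = (upperBound - lowerBound) / numPartitions
    (0 until numPartitions).map { i =>
      val lo = lowerBound + i * offset
      if (i == numPartitions - 1) s"hash_code >= $lo" // last partition is open-ended
      else s"hash_code >= $lo and hash_code < ${lo + offset}"
    }
  }

  def main(args: Array[String]): Unit =
    clauses(0L, 10L, 10).foreach(println)
}
```

Running it prints one predicate per partition, matching the 10 queries listed below.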

select mod(tmp.empid,10) as hash_code,tmp.* from employee as tmp where hash_code >= 0 and hash_code < 1
select mod(tmp.empid,10) as hash_code,tmp.* from employee as tmp where hash_code >= 1 and hash_code < 2
.
.
select mod(tmp.empid,10) as hash_code,tmp.* from employee as tmp where hash_code >= 9

This will create 10 partitions, and

empids ending in 0 will all go into one partition, because mod(empid, 10) is always 0 for them

empids ending in 1 will all go into another partition, because mod(empid, 10) is always 1 for them

In this way, all the employee rows are split across 10 partitions.
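The bucketing claim can be checked with plain Scala, independently of Spark (the sample empids are made up for illustration):

```scala
// Grouping ids by mod 10 puts every empid with the same last digit
// into the same bucket -- the same effect the hash_code column has.
object ModBuckets {
  def bucket(empids: Seq[Int]): Map[Int, Seq[Int]] = empids.groupBy(_ % 10)

  def main(args: Array[String]): Unit = {
    val sample = Seq(100, 101, 110, 111, 205, 219) // hypothetical empids
    bucket(sample).toSeq.sortBy(_._1).foreach { case (h, ids) =>
      println(s"hash_code=$h -> ${ids.mkString(", ")}")
    }
  }
}
```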

You have to change the partitionColumn, upperBound, lowerBound and numPartitions values according to your requirements.

Hope my answer helps you.

Answer 1 (score: 0)

Partitioning requires:

  • a partition column (integer),
  • the number of partitions,
  • the lower bound of the column,
  • the upper bound of the column.

The last two are missing, and that is why you get the error.
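A minimal sketch of an options map that supplies all four, plus the usual connection settings. The URL, table, column, and bound values are placeholders, not taken from the question:

```scala
// The four options Spark's JDBC source needs for a partitioned read,
// alongside the connection options. All concrete values are placeholders.
val jdbcOptions = Map(
  "url"             -> "jdbc:postgresql://host:5432/db", // placeholder
  "dbtable"         -> "employee",                       // placeholder
  "driver"          -> "org.postgresql.Driver",
  "user"            -> "user",                           // placeholder
  "password"        -> "pwd",                            // placeholder
  "partitionColumn" -> "empid",  // 1. integer column
  "numPartitions"   -> "10",     // 2. number of partitions
  "lowerBound"      -> "0",      // 3. lower bound of the column
  "upperBound"      -> "100000"  // 4. upper bound of the column
)
// then: sqlContext.read.format("jdbc").options(jdbcOptions).load()
```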