Spark RDD custom partitioner: partitionBy not found on RDD

Time: 2019-08-18 18:55:33

Tags: scala apache-spark

I am learning how to write a custom Spark RDD partitioner and coded up some logic, but it does not compile.

In Spark 2.4.3, in the spark shell:

case class Transaction(name:String, amount:Double, country:String)
val transactions = Seq(
 Transaction("Bob", 100, "UK"),
 Transaction("James", 15, "UK"),
 Transaction("Marek", 51, "US"),
 Transaction("Paul", 57, "US")
)

import org.apache.spark.Partitioner
class CountryPartitioner(override val numPartitions: Int) extends Partitioner {
  def getPartition(key: Any): Int = key match {
    case s: Transaction => s.country.hashCode % numPartitions
  }
  override def equals(other: Any): Boolean = other.isInstanceOf[CountryPartitioner]
  override def hashCode: Int = 0
}

val rdd = sc.parallelize(transactions).partitionBy(new CountryPartitioner(2))

The error is

error: value partitionBy is not a member of org.apache.spark.rdd.RDD[Transaction]
       rdd.partitionBy(new CountryPartitioner(2))
           ^

I know from online sources that the code below runs fine. My code is almost identical, the only difference being the Transaction class... I don't understand why mine doesn't work. I can't even find this in the online RDD API.

import org.apache.spark.Partitioner
class TwoPartsPartitioner(override val numPartitions: Int) extends Partitioner {
  def getPartition(key: Any): Int = key match {
    case s: String => if (s(0).toUpper > 'J') 1 else 0
  }
  override def equals(other: Any): Boolean = other.isInstanceOf[TwoPartsPartitioner]
  override def hashCode: Int = 0
}

var x = sc.parallelize(Array(("sandeep",1),("giri",1),("abhishek",1),("sravani",1),("jude",1)), 3)
var y = x.partitionBy(new TwoPartsPartitioner(2))

Source: https://gist.github.com/girisandeep/f90e456da6f2381f9c86e8e6bc4e8260

1 answer:

Answer 0 (score: 1)

This will not work, because you need a key-value pair RDD for RDD partitioning to work. Spark's error messages can sometimes be a little vague here. The Transaction class is not a KV pair.
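
As a minimal sketch of the fix (assuming the goal is to partition transactions by country; the map to (country, transaction) pairs and the String-keyed getPartition below are my additions, not the asker's code), key the RDD first so that partitionBy, which is only defined for pair RDDs, becomes available:

import org.apache.spark.Partitioner

// partitionBy is defined on RDD[(K, V)] (via PairRDDFunctions), not on RDD[T],
// so key the transactions by country before partitioning.
class CountryPartitioner(override val numPartitions: Int) extends Partitioner {
  def getPartition(key: Any): Int = key match {
    // The key is now the country String rather than the whole Transaction
    case country: String => math.abs(country.hashCode) % numPartitions
  }
  override def equals(other: Any): Boolean = other.isInstanceOf[CountryPartitioner]
  override def hashCode: Int = 0
}

val byCountry = sc.parallelize(transactions).map(t => (t.country, t))  // RDD[(String, Transaction)]
val partitioned = byCountry.partitionBy(new CountryPartitioner(2))

In the spark shell, partitioned.partitioner then reports the custom partitioner, and partitioned.glom().collect() shows the rows grouped into the two partitions.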

See Partitioning of Data Frame in Pyspark using Custom Partitioner, which is another answer, not mine.

Many operations on RDDs are KV-pair-oriented, e.g. join, which is not especially convenient.
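
For instance (a hypothetical continuation of the sketch above, reusing byCountry; the countryLimits data is made up purely for illustration), join and the other pair-RDD operations only become available once the data is keyed:

// join and reduceByKey are only defined on RDD[(K, V)], which is why a plain
// RDD[Transaction] cannot use them (or partitionBy) directly.
val countryLimits = sc.parallelize(Seq(("UK", 500.0), ("US", 1000.0)))

val joined = byCountry.join(countryLimits)                               // RDD[(String, (Transaction, Double))]
val totalsPerCountry = byCountry.mapValues(_.amount).reduceByKey(_ + _)  // RDD[(String, Double)]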