How can I group a Spark DataFrame using my own equality comparator?

Asked: 2019-03-13 16:40:12

Tags: scala sorting apache-spark apache-spark-sql

I want to use the groupBy operator on a DataFrame with my own equality comparator.

Suppose I want to perform the following operation:

df.groupBy("Year","Month").sum("Counter")

on this DataFrame:

Year    | Month      | Counter
------------------------------
2012    | Jan        | 100
12      | January    | 200
12      | Janu       | 300
2012    | Feb        | 400
13      | Febr       | 500

I would have to implement two comparators:

1) For the Year column: e.g. "2012" == "12"

2) For the Month column: e.g. "Jan" == "January" == "Janu"

Suppose I have already implemented these two comparators. How do I invoke them? As in this example, I already know that I have to convert the DataFrame to an RDD in order to use the comparators.

I thought about using the RDD groupBy.

Note that I really need to do this with comparators. I cannot use UDFs, change the data, or create new columns. The longer-term idea is to have columns of ciphertexts, along with functions that can tell me whether two ciphertexts are equal; I want to call those functions from inside the comparators.

Edit:

At the moment I am trying to do this with just one column, for example:

df.groupBy("Year").sum("Counter")

I have a wrapper class:

class ExampleWrapperYear (val year: Any) extends Serializable {
      // override the hashCode and equals methods
}
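A concrete version of that wrapper could look like the sketch below (a hypothetical implementation, assuming all years fall in the 2000-2999 range):

```scala
// Hypothetical sketch: normalize the year inside hashCode/equals so that
// 2012 and 12 (or "2012" and "12") land in the same group.
class ExampleWrapperYear(val year: Any) extends Serializable {
  // assumes years in the 2000-2999 range
  private def normalized: Int = 2000 + year.toString.toInt % 100

  override def hashCode(): Int = normalized.hashCode

  override def equals(that: Any): Boolean = that match {
    case other: ExampleWrapperYear => other.normalized == this.normalized
    case _                         => false
  }
}
```

Note that equal objects must also produce equal hash codes, otherwise groupByKey will silently place them in different groups.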

Then I am doing the following:

val rdd = df.rdd.keyBy(a => new ExampleWrapperYear(a(0))).groupByKey()

My question is how to perform the "sum" afterwards, and how to use keyBy with multiple columns so that both ExampleWrapperYear and ExampleWrapperMonth are used.
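The shape of the summing step can be sketched with plain Scala collections standing in for the RDD (here groupBy plays the role of groupByKey; on the real RDD you would map each group's rows to their counter column and sum them the same way):

```scala
// Stand-in for the keyed data: (normalizedYear, counter) pairs from the example.
val keyed = Seq((2012, 100), (2012, 200), (2012, 300), (2012, 400), (2013, 500))

// groupBy plays the role of RDD.groupByKey; then sum each group's counters.
val sums = keyed
  .groupBy(_._1)
  .map { case (year, rows) => (year, rows.map(_._2).sum) }
```

For multiple columns, the key passed to keyBy can be a tuple of wrappers, e.g. `a => (new ExampleWrapperYear(a(0)), new ExampleWrapperMonth(a(1)))`; tuple equality and hashing delegate to each wrapper's own equals/hashCode.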

2 Answers:

Answer 0 (score: 1)

You can use UDFs to implement the logic that normalizes the values to a standard year/month format:

  import org.apache.spark.sql.functions.{col, udf}

  def toYear: Integer => Integer = (year: Integer) => {
    2000 + year % 100 // assuming all years are in the 2000-2999 range
  }

  def toMonth: String => String = (month: String) => {
    month match {
      case "January" => "Jan"
      case "Janu" => "Jan"
      case "February" => "Feb"
      case "Febr" => "Feb"
      case _ => month
    }
  }

  val toYearUdf = udf(toYear)
  val toMonthUdf = udf(toMonth)

  df.groupBy(toYearUdf(col("Year")), toMonthUdf(col("Month"))).sum("Counter").show()
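The normalization logic itself is plain Scala and can be sanity-checked without a Spark session (rewritten here as ordinary methods):

```scala
// Same normalization rules as the UDFs above, as plain methods.
def toYear(year: Int): Int = 2000 + year % 100 // assumes years in 2000-2999

def toMonth(month: String): String = month match {
  case "January" | "Janu"  => "Jan"
  case "February" | "Febr" => "Feb"
  case _                   => month
}
```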

Answer 1 (score: 1)

This solution should work. Below are the case classes (we can call them comparators) that implement hashCode and equals.

You can modify/update the hashCode and equals logic based on your different ciphertexts.

  case class Year(var year: Int) {

    override def hashCode(): Int = {
      // Normalize to a four-digit year with the same rule as equals, so that
      // equal years always hash identically; also updates the stored value
      // to the standard form. Assumes all years are in the 2000-2999 range.
      this.year = 2000 + this.year % 100
      this.year.hashCode()
    }

    override def equals(that: Any): Boolean = {
      val year1 = 2000 + that.asInstanceOf[Year].year % 100
      val year2 = 2000 + this.year % 100
      year1 == year2
    }
  }

  case class Month(var month: String) {

    // Map long or partial month names to a standard short form.
    private def normalize(m: String): String = m match {
      case "January" | "Janu"  => "Jan"
      case "February" | "Febr" => "Feb"
      case _                   => m
    }

    override def hashCode(): Int = {
      // Normalize and update the stored value so the grouped key is standard.
      this.month = normalize(this.month)
      this.month.hashCode
    }

    override def equals(that: Any): Boolean =
      normalize(this.month) == normalize(that.asInstanceOf[Month].month)
  }

Here is the important comparator for the grouping key; it simply delegates to the single-column comparators:

  case class Key(var year: Year, var month: Month) {

    override def hashCode(): Int =
      this.year.hashCode() + this.month.hashCode()

    override def equals(that: Any): Boolean =
      this.year.equals(that.asInstanceOf[Key].year) &&
        this.month.equals(that.asInstanceOf[Key].month)
  }

  import spark.implicits._ // needed for .as[Record] and .toDS()

  case class Record(year: Int, month: String, counter: Int)

  val df = spark.read.format("com.databricks.spark.csv")
      .option("header", "true")
      .option("inferSchema", "true")
      .load("data.csv").as[Record]

  df.rdd.groupBy[Key](
      (record:Record)=>Key(Year(record.year), Month(record.month)))
      .map(x=> Record(x._1.year.year, x._1.month.month, x._2.toList.map(_.counter).sum))
      .toDS().show()

which gives

+----+-----+-------+
|year|month|counter|
+----+-----+-------+
|2012|  Feb|    800|
|2013|  Feb|    500|
|2012|  Jan|    700|
+----+-----+-------+

for this input in data.csv

Year,Month,Counter
2012,February,400
2012,Jan,100
12,January,200
12,Janu,300
2012,Feb,400
13,Febr,500
2012,Jan,100

Note that the Year and Month case classes also update their values to the standard form (otherwise it would be unpredictable which of the raw values ends up in the output).
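Mutating state inside hashCode is fragile, since hash-based structures assume a stable hash. A side-effect-free alternative (sketched here with hypothetical names) keeps the raw value and exposes the normalized one for the output row:

```scala
// Hypothetical side-effect-free variant: normalize on read instead of
// mutating inside hashCode. Assumes years in the 2000-2999 range.
case class YearKey(raw: Int) {
  def normalized: Int = 2000 + raw % 100

  override def hashCode(): Int = normalized.hashCode

  override def equals(that: Any): Boolean = that match {
    case YearKey(other) => 2000 + other % 100 == normalized
    case _              => false
  }
}
```

The grouped result would then emit `key.normalized` instead of relying on the key having been mutated during hashing.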