Dynamic where-condition generation in Scala

Asked: 2018-10-31 04:50:11

Tags: sql scala apache-spark apache-spark-sql

I have to generate a where condition based on a case class / DataFrame.

For example, I have the sample data below, which I can get from a case class or from a DataFrame with 4 columns holding a large amount of data. I have to filter the rows by id, and for each id I have to generate the whereQuery.

The columns are (id, col1, col2, col3).

|-------------------------------------------------------|
|id      |      col1     |      col2     |      col3    |
|-------------------------------------------------------|
|"1"     |   "col1vr1"   |   "col2vr1"   |   "col3vr1"  |
|"1"     |   "col1vr2"   |   "col2vr2"   |   "col3vr2"  |
|-------------------------------------------------------|

For the data above, I have to generate a where clause like the following:

( col("col1")<=>col1vr1 &&  col("col2")<=>col2vr1 && col("col3") <=> col3vr1 ) ||  ( col("col1")<=>col1vr2 &&  col("col2")<=>col2vr2 && col("col3") <=> col3vr2 )

so that I can apply it to a condition such as when( finalColumn, "We don't have any records for this rule" ) // finalColumn is the query generated here
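In valid Spark/Scala syntax, the hand-written condition for those two sample rows would look roughly like this (purely illustrative):

    import org.apache.spark.sql.functions.{col, when}

    // hand-written version of the condition above, for the two sample rows only
    val finalColumn =
      ( col("col1") <=> "col1vr1" && col("col2") <=> "col2vr1" && col("col3") <=> "col3vr1" ) ||
      ( col("col1") <=> "col1vr2" && col("col2") <=> "col2vr2" && col("col3") <=> "col3vr2" )

    val checked = when( finalColumn, "We don't have any records for this rule" )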

I tried the following.

case class test(id: String, col1: String, col2: String, col3: String)

Test data:

 val testmap = List(
   test("1", "col1v", "col2va", "col3va"),
   test("1", "col1v", "col2va", "col3vb"),
   test("1", "col1va", "col2va", "col3vc"),
   test("1", "col1va", "col2va", "col3vd"),
   test("1", "col1vb", "col2vb", "col3vd"),
   test("1", "col1vb", "col2vb", "col3ve"),
   test("1", "col1vb", "col2va", "col3vd"),
   test("1", "col1vb", "col2va", "col3vf"),
   test("1", "col1vc", "col2vb", "col3vf"),
   test("1", "col1vc", "col2vc", "col3vf"),
   test("2", "col1v", "col2va", "col3va"),
   test("2", "col1v", "col2va", "col3vb"),
   test("2", "col1vb", "col2vb", "col3ve"),
   test("2", "col1vb", "col2vb", "col3vd"),
   test("2", "col1vc", "col2vc", "col3vf"),
   test("3", "col1va", "col2va", "col3va"),
   test("3", "col1vb", "col2vb", "col3vb"),
   test("3", "col1vc", "col2vc", "col3vc") )

Code snippet:

 var whereCond = scala.collection.mutable.ArrayBuffer[Column]()
 val t1 = testmap.filter( p => p.id.equalsIgnoreCase("1") )  // this is called per iteration; we need one id's rules per iteration
 t1.map( rule => {
   if ( !rule.col1.equalsIgnoreCase("all") ) {
     whereCond += (col("col1") <=> rule.col1 + " && ")
     if ( !rule.col2.equalsIgnoreCase("all") ) {
       whereCond += (col("col2") <=> rule.col2 + " && ")
     }
     if ( !rule.col3.equalsIgnoreCase("all") ) {
       whereCond += (col("col3") <=> rule.col3 + "  || ")
     }
   }
 })
 var finalColumn = col("")
 whereCond.toArray[Column].map(c => { finalColumn.+=(c) })
 finalColumn

But I did not get the expected result.

I also tried the following snippet:

  var columnData = col("")
  val df = testmap.toDF.where($"id" <=> "3").distinct
  val col1List = df.select("col1").rdd.map(r => r.getString(0)).collect().toList
  val col2List = df.select("col2").rdd.map(r => r.getString(0)).collect().toList
  val col3List = df.select("col3").rdd.map(r => r.getString(0)).collect().toList
  for ( i <- 0 to col1List.size - 1 )
    if ( columnData == col("") )
      columnData = col("col1") <=> col1List(i) && col("col2") <=> col2List(i) && col("col3") <=> col3List(i)
    else
      columnData = columnData || ( col("col1") <=> col1List(i) && col("col2") <=> col2List(i) && col("col3") <=> col3List(i) )
  columnData

Whenever we do an && or || operation on a Column in Scala, it automatically creates parentheses around them.

For the code above, I get the following output:

    (((((col1 <=> col1vc) AND (col2 <=> col2vc)) AND (col3 <=> col3vc)) 
    OR (((col1 <=> col1va) AND (col2 <=> col2va)) AND (col3 <=> col3va))) 
    OR (((col1 <=> col1vb) AND (col2 <=> col2vb)) AND (col3 <=> col3vb))) 

But I expected the output to be:

    col1 <=> col1vc AND col2 <=> col2vc AND col3 <=> col3vc 
    OR (col1 <=> col1va AND col2 <=> col2va AND col3 <=> col3va )
    OR (col1 <=> col1vb AND col2 <=> col2vb AND col3 <=> col3vb )

1 Answer:

Answer 0 (score: 2)


"Whenever we do an && or || operation on a Column in Scala, it automatically creates parentheses around them."

That is not Scala's doing. That is plain SQL operator precedence, where (quoting an answer by charles-bretana):


"And has precedence over Or, so, even if … <=> a1 Or a2 …"

In other words, a OR b AND c is evaluated as a OR (b AND c). If that is not what you want, you should put parentheses in the expression explicitly:

scala> import org.apache.spark.sql.functions.col
import org.apache.spark.sql.functions.col

scala> val col1 = col("col1")
col1: org.apache.spark.sql.Column = col1

scala> val col2 = col("col2")
col2: org.apache.spark.sql.Column = col2

scala> val col3 = col("col3")
col3: org.apache.spark.sql.Column = col3

scala> (col1 <=> "col1vc" and col2 <=> "col1vc")
res0: org.apache.spark.sql.Column = ((col1 <=> col1vc) AND (col2 <=> col1vc))

scala> col1 <=> "col1vc" and col2 <=> "col1vc" and col3 <=> "col3vc"
res1: org.apache.spark.sql.Column = (((col1 <=> col1vc) AND (col2 <=> col1vc)) AND (col3 <=> col3vc))

scala> col1 <=> "col1vc" and col2 <=> "col1vc" and (col3 <=> "col3vc" or (col1 <=> "col1va" and col2 <=> "col2va" and col3 <=> "col3va"))
res2: org.apache.spark.sql.Column = (((col1 <=> col1vc) AND (col2 <=> col1vc)) AND ((col3 <=> col3vc) OR (((col1 <=> col1va) AND (col2 <=> col2va)) AND (col3 <=> col3va))))
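As a side note beyond the precedence point above, the OR-of-ANDs condition can also be assembled from the rows without mutable state; a minimal sketch, assuming the testmap list from the question and the usual Spark imports:

    import org.apache.spark.sql.Column
    import org.apache.spark.sql.functions.{col, lit}

    // one AND-conjunction per rule row, then OR them all together
    val columnData: Column = testmap
      .filter(_.id.equalsIgnoreCase("3"))
      .distinct
      .map(r => col("col1") <=> r.col1 && col("col2") <=> r.col2 && col("col3") <=> r.col3)
      .reduceOption(_ || _)
      .getOrElse(lit(false))   // no rules for this id -> condition never matches

The extra parentheses Spark prints only make the left-to-right grouping explicit; AND and OR are associative here, so the result is the same as the expected output in the question.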