How to do an outer join: Spark Scala SQLContext

Date: 2016-07-01 04:57:42

Tags: scala apache-spark pyspark apache-spark-sql user-defined-functions

I am trying to get Total (count of everything) and Top Elements (count after filtering), so that I can find the percentile of each placeName across all the JSONs (top / total), where the rating is > 3:

    // sc : An existing SparkContext.
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    import sqlContext.implicits._            // needed for the $"..." column syntax
    import org.apache.spark.sql.functions._  // needed for explode, count, etc.

    val df = sqlContext.jsonFile("temp.txt")
    // df.show()

    // One row per element of the visited array.
    val res = df.withColumn("visited", explode($"visited"))

    // Note: without .agg(...) this is only a grouped dataset, not a DataFrame yet.
    val result = res.groupBy($"customerId", $"visited.placeName")

Tried with joins:

    val result1 = res
      .groupBy($"customerId", $"visited.placeName")
      .agg(count("*").alias("total"))

    val result2 = res
      .filter($"visited.rating" < 4)
      .groupBy($"requestId", $"visited.placeName")
      .agg(count("*").alias("top"))

    result1.show()

    result2.show()

    percentile = result1.join(result2, List("placeName", "customerId"), "outer")
    sqlContext.sql("select top/total as percentile from temp groupBy placeName")

But it gives me errors.
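For reference, this snippet has several surface problems independent of the overall approach: percentile is assigned without val, result2 groups by requestId (a column that does not exist in the schema; customerId was presumably intended), nothing is ever registered under the table name temp, and the SQL string uses groupBy instead of GROUP BY. A minimal corrected sketch of just this step, under those assumptions (and assuming the grouping column derived from visited.placeName resolves to placeName on both sides), could look like:

    // Hypothetical restatement of result2, assuming customerId (not requestId) was intended,
    // so that both join keys exist on both sides.
    val result2 = res
      .filter($"visited.rating" < 4)
      .groupBy($"customerId", $"visited.placeName")
      .agg(count("*").alias("top"))

    // The joined result needs a val, and the SQL needs a registered table plus GROUP BY.
    val percentile = result1.join(result2, List("placeName", "customerId"), "outer")
    percentile.registerTempTable("temp")  // Spark 1.x API, consistent with SQLContext / jsonFile above

    sqlContext.sql(
      "SELECT placeName, SUM(COALESCE(top, 0)) / SUM(total) * 100 AS percentile " +
      "FROM temp GROUP BY placeName").show()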

Can I do this in a UDF:

    val result1 = result
      .withColumn("Top", getCount(res, true))
      .withColumn("Total", getCount(result, false))
      .show()

    def getCount(df: DataFrame, flag: Boolean): Int {
      if (flag == "true") return df.filter($"visited.rating" < 3).groupBy($"customerId", $"visited.placeName").agg(count("*"))
      else return df.agg(count("*"))
    }

My input JSON files:

{
    "country": "France",
    "customerId": "France001",
    "visited": [
        {
            "placeName": "US",
            "rating": "2",
            "famousRest": "N/A",
            "placeId": "AVBS34"
        },
        {
            "placeName": "US",
            "rating": "3",
            "famousRest": "SeriousPie",
            "placeId": "VBSs34"
        },
        {
            "placeName": "Canada",
            "rating": "3",
            "famousRest": "TimHortons",
            "placeId": "AVBv4d"
        }
    ]
}

US top = 1 count = 3
Canada top = 1 count = 3


{
    "country": "Canada",
    "customerId": "Canada012",
    "visited": [
        {
            "placeName": "UK",
            "rating": "3",
            "famousRest": "N/A",
            "placeId": "XSdce2"
        }
    ]
}
UK top = 1 count = 1


{
    "country": "France",
    "customerId": "France001",
    "visited": [
        {
            "placeName": "US",
            "rating": "4.3",
            "famousRest": "N/A",
            "placeId": "AVBS34"
        },
        {
            "placeName": "US",
            "rating": "3.3",
            "famousRest": "SeriousPie",
            "placeId": "VBSs34"
        },
        {
            "placeName": "Canada",
            "rating": "4.3",
            "famousRest": "TimHortons",
            "placeId": "AVBv4d"
        }
    ]
}

US top = 2 count = 3
Canada top = 1 count = 3

So in the end I need something like this:

PlaceName  percentile
US         57.14            (1+1+2)/(3+1+3) *100
Canada     33.33            (1+1)/(3+3) *100
UK         100               1*100

Schema:

root
|-- country: string (nullable = true)
|-- customerId: string (nullable = true)
|-- visited: array (nullable = true)
|    |-- element: struct (containsNull = true)
|    |    |-- placeId: string (nullable = true)
|    |    |-- placeName: string (nullable = true)
|    |    |-- famousRest: string (nullable = true)
|    |    |-- rating: string (nullable = true)
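One detail worth calling out in this schema: rating is a string, not a number, so a filter like the one on visited.rating above relies on implicit casting. An explicit cast (my addition for illustration, not part of the original code) makes the numeric comparison unambiguous for values such as "4.3" and "3":

    // Cast the string rating to double before comparing numerically; topVisits is an illustrative name.
    val topVisits = res.filter($"visited.rating".cast("double") < 4)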

1 Answer:

Answer 0 (score: 2)

Given the code you have provided, it is not clear how your source is structured or why you see this particular error, but generally speaking this code is not even remotely valid.

  • getCount is not a UDF. Not critical, but it is an important distinction.
  • getCount is not a valid function because there is no col type in scope. Unless you are using this for some reason as a type alias for o.a.s.sql.DataFrame, it cannot even compile!
  • Even if the types matched, Spark does not support nested operations / transformations, so you cannot execute queries or aggregations on a Spark DataFrame from inside a UDF (a join-based alternative that avoids this is sketched below).
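For completeness, one way to get the per-placeName percentile while staying entirely inside DataFrame operations (no aggregation from inside a UDF) is to compute the two counts as separate aggregations and combine them with an outer join, as the question's title suggests. The sketch below is not the answerer's code; it reuses df and sqlContext from the question, uses the rating < 4 threshold from one of the question's variants, and casts rating explicitly because it is a string in the schema:

    import org.apache.spark.sql.functions._
    // Assumes sqlContext.implicits._ is imported and df is the DataFrame read in the question.

    // One row per visited place, keeping only the fields this computation needs.
    val visits = df
      .withColumn("visited", explode($"visited"))
      .select(
        $"visited.placeName".alias("placeName"),
        $"visited.rating".cast("double").alias("rating"))  // rating is a string in the schema

    // Total visits per place.
    val totals = visits.groupBy($"placeName").agg(count("*").alias("total"))

    // Visits passing the rating filter, per place.
    val tops = visits.filter($"rating" < 4).groupBy($"placeName").agg(count("*").alias("top"))

    // Outer join keeps places with no qualifying visits; treat their missing top as 0.
    val percentile = totals
      .join(tops, Seq("placeName"), "outer")
      .withColumn("percentile", coalesce($"top", lit(0)) / $"total" * 100)
      .select($"placeName", $"percentile")

    percentile.show()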