How to do a SQL join in Spark?

Asked: 2019-03-29 22:39:05

Tags: sql apache-spark pyspark pyspark-sql

I want to do a SQL join between two tables in Spark, but I get an unexpected error:

>>> cyclistes.printSchema()
root
 |-- id: string (nullable = true)
 |-- age: string (nullable = true)
(...)
>>> voyages.printSchema()
root
 |-- id: string (nullable = true)
 |-- vitesse: string (nullable = true)
 (...)
>>> requete_sql = """
SELECT c.id, c.age, mean(v.vitesse)
FROM   cyclistes as c , voyages as v
WHERE c.id == v.id
GROUP BY c.id
"""
>>> spark.sql(requete_sql)


   AnalysisException: "grouping expressions sequence is empty, and 
'c.`age`' is not an aggregate function. Wrap '(avg(CAST(v.`vitesse` 
AS DOUBLE)) AS `avg(CAST(vitesse AS DOUBLE))`)' in windowing 
function(s) or wrap 'c.`age`' in first() (or first_value) if you 
don't care which value you get.;

Any ideas?
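(For reference, spark.sql can only resolve the names cyclistes and voyages if the DataFrames were registered as views beforehand; a minimal setup sketch, assuming the two DataFrames already exist under those names:)

    >>> cyclistes.createOrReplaceTempView("cyclistes")  # assumed setup, not shown in the post
    >>> voyages.createOrReplaceTempView("voyages")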

1 Answer:

Answer 0: (score: 0)

ANSWER:

Basic mistake in the SQL query: max() should be added around age:

    >>> requete_sql = """ 
SELECT c.id, max(c.age), mean(v.vitesse) 
FROM  cyclistes as c , voyages as v 
WHERE c.id == v.id GROUP BY c.id """
>>> spark.sql(requete_sql)
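As a supplement (a sketch assuming the same view and column names as the question; the vitesse_moyenne alias is purely illustrative and not part of the original answer): instead of wrapping age in max(), c.age can simply be added to the GROUP BY, and the same join and aggregation can also be written with the DataFrame API.

    from pyspark.sql import functions as F

    # Variant 1: keep c.age in the output by grouping on it as well
    requete_sql = """
    SELECT   c.id, c.age, avg(v.vitesse) AS vitesse_moyenne
    FROM     cyclistes AS c
    JOIN     voyages   AS v ON c.id = v.id
    GROUP BY c.id, c.age
    """
    spark.sql(requete_sql).show()

    # Variant 2: the same join and aggregation with the DataFrame API
    # (vitesse is a string column, so it is cast explicitly before averaging)
    (cyclistes.join(voyages, on="id")
              .groupBy("id", "age")
              .agg(F.avg(F.col("vitesse").cast("double")).alias("vitesse_moyenne"))
              .show())

Both variants assume id values are unique per cyclist, so grouping by (id, age) gives the same groups as grouping by id alone.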