org.apache.spark.sql.AnalysisException: cannot resolve '`angelique`' given input columns

Asked: 2017-04-26 08:21:16

Tags: scala apache-spark-sql

I have a table created from a DataFrame, and when I try to run a query against it, it fails as follows:

17/04/26 10:11:01 INFO SparkSqlParser: Parsing command: users
17/04/26 10:11:01 INFO SparkSqlParser: Parsing command: SELECT login FROM global_temp.users where login=angelique
Exception in thread "main" org.apache.spark.sql.AnalysisException: cannot resolve '`angelique`' given input columns: [idExterne, login, password, uid]; line 1 pos 48;
'Project ['login]
+- 'Filter (login#60 = 'angelique)
   +- SubqueryAlias users, `global_temp`.`users`
      +- Project [_1#50 AS idExterne#59, _2#51 AS login#60, _3#52 AS password#61, _4#53 AS uid#62]


3 Answers:

Answer 0: (score: 0)

Replace your query with this:

sc.sql(s"SELECT login FROM global_temp.users where login='$login'").show

and it will work: with the single quotes, the interpolated value is parsed as a SQL string literal rather than as a column name.
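To see why the quotes matter: string interpolation just pastes the value into the SQL text, and without surrounding single quotes the parser reads `angelique` as an identifier, i.e. a column reference, which is exactly the `cannot resolve '`angelique`'` error above. A minimal sketch comparing the two resulting query strings (plain Scala, no Spark session needed):

```scala
object QuoteDemo extends App {
  val login = "angelique"

  // Unquoted: the SQL parser will treat `angelique` as a column name
  val broken = s"SELECT login FROM global_temp.users where login=$login"

  // Quoted: the SQL parser treats 'angelique' as a string literal
  val fixed = s"SELECT login FROM global_temp.users where login='$login'"

  println(broken) // SELECT login FROM global_temp.users where login=angelique
  println(fixed)  // SELECT login FROM global_temp.users where login='angelique'
}
```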

Answer 1: (score: 0)

As far as I know, dots are not allowed in column names in Spark 2.0.1. I'm not sure whether they are allowed in DataFrame names. Maybe you could try omitting or replacing the dots in the DataFrame name?

Answer 2: (score: 0)

Try this (note: `===` is the DataFrame Column API operator and is not valid inside a SQL string; in SQL text use a plain `=` with the value quoted):

sc.sql(s"SELECT login FROM global_temp.users where login='$login'").show
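One caveat with direct interpolation: if the value itself can contain a single quote (e.g. `o'brien`), pasting it into the SQL string breaks the query. A small hypothetical helper (the name `sqlLiteral` is my own, not a Spark API) that applies the standard SQL escape of doubling embedded single quotes before wrapping the value:

```scala
object EscapeDemo extends App {
  // Hypothetical helper (not part of Spark): double any embedded single
  // quotes (the standard SQL escape), then wrap the value in quotes.
  def sqlLiteral(value: String): String =
    "'" + value.replace("'", "''") + "'"

  println(sqlLiteral("angelique")) // 'angelique'
  println(sqlLiteral("o'brien"))   // 'o''brien'

  // Usage sketch (assumes a Spark session named `sc` as in the answers above):
  // sc.sql(s"SELECT login FROM global_temp.users where login=${sqlLiteral(login)}").show
}
```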