How to pass variables to a spark.sql query in pyspark?

Asked: 2018-12-30 08:00:24

Tags: python pyspark apache-spark-sql

How do I pass variables to a spark.sql query in pyspark? When I query the table with formatted values, it fails with an AnalysisException. Why?

>>> spark.sql("select * from student").show()

+-------+--------+
|roll_no|    name|
+-------+--------+
|      1|ravindra|
+-------+--------+

>>> spark.sql("select * from student where roll_no={0} and name={1}".format(id,name)).show()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/spark-2.3.0-bin-hadoop2.6/python/pyspark/sql/session.py", line 767, in sql
    return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
  File "/usr/local/spark-2.3.0-bin-hadoop2.6/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
  File "/usr/local/spark-2.3.0-bin-hadoop2.6/python/pyspark/sql/utils.py", line 69, in deco
    raise AnalysisException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.AnalysisException: u"cannot resolve '`ravindra`' given input columns: [default.student.id, default.student.roll_no, default.student.name]; line 1 pos 47;\n'Project [*]\n+- 'Filter ((roll_no#21 = 0) && (name#22 = 'ravindra))\n   +- SubqueryAlias `default`.`student`\n      +- HiveTableRelation `default`.`student`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [id#20, roll_no#21, name#22]\n"

1 Answer:

Answer 0 (score: 0):

I usually use the %s string formatter in my SQL strings:

spark.sql('select * from student where roll_no=%s and name="%s"' % ('1', 'ravindra')).show()
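With the values substituted, that renders to the following (the double quotes around the second %s keep ravindra a string literal):

select * from student where roll_no=1 and name="ravindra"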

Looking at your SQL traceback: when name= was passed into the SQL string, the quotes around the value ravindra were missing, so the SQL engine treated it as an identifier (a column reference) rather than a string literal.

Your SQL query then became:

select * from student where roll_no=1 and name=ravindra  -- no quotes

You can adjust your SQL string to:

spark.sql("select * from student where roll_no={0} and name='{1}'".format(id,name)).show()

Quoting your {1} gives you the desired result.
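As a side note, string interpolation breaks down whenever a value needs quoting or escaping. A quoting-safe alternative is to filter with the DataFrame API, which handles the value types for you. Here is a minimal sketch against the same student table, assuming the roll_no and name variables hold the values you want to match:

from pyspark.sql.functions import col

roll_no = 1
name = 'ravindra'

# col() comparisons accept Python values directly, so no manual quoting is needed
spark.table("student").filter((col("roll_no") == roll_no) & (col("name") == name)).show()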