Spark SQL query with pipe (`||`) problem

Date: 2017-04-03 23:05:44

Tags: apache-spark apache-spark-sql spark-dataframe

I have the following query:

select distinct 
  'sandbox_' || mp.email as "course_id",
  'Sandbox - ' || mp.first || ' ' || mp.last  as "short_name",
  'Sandbox - ' || mp.first || ' ' || mp.last  as "long_name",
  'instructor_sandboxes' as "account_id",
  'Sandbox' as "term_id",
  'active' as "status",
    null as "start_date",
    null as "end_date"
 from 
  table1 mp, 
  table2 bi,
  table3 term
 where user_type in ('A','B','C','D','E')
    and kerberos_name not like '%/%'
    and kerberos_name != '@staff@'
    and mp.banner_pidm=bi.pidm
    and term.startdate >= sysdate - 365 
    and term.enddate <= sysdate +365/2
    and term.term_code=bi.term_code
  ORDER BY "course_id"

I read this query from a text file at startup. I replace all line separators with:

sql = sql.replace(System.lineSeparator(), "");

Then I use the following:

DataFrame df = sqlContext.read().format("jdbc").options(db_properties).load();

where db_properties contains a field named "dbtable" whose value is '(sql_statement)'.
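As a side note, a minimal sketch of how the "dbtable" value could be built from the file's SQL (the class and method names here are hypothetical, not from the question): the statement is usually wrapped in parentheses and given an alias, and replacing line separators with a space rather than an empty string avoids gluing the last token of one line to the first token of the next.

```java
// Sketch (assumption): build a "dbtable"-style value from multi-line SQL.
// "query_alias" and the helper name asDbTable are illustrative only.
public class DbTableOption {

    static String asDbTable(String sql) {
        // Collapse line separators to spaces; replacing them with ""
        // would merge adjacent tokens across lines.
        String flat = sql.replace(System.lineSeparator(), " ").trim();
        // Drop a trailing semicolon, which JDBC sources typically reject.
        if (flat.endsWith(";")) {
            flat = flat.substring(0, flat.length() - 1);
        }
        // Wrap as a parenthesized subquery with an alias.
        return "(" + flat + ") query_alias";
    }

    public static void main(String[] args) {
        String sql = "select 1 as one" + System.lineSeparator() + "from dual;";
        System.out.println(asDbTable(sql));
        // prints: (select 1 as one from dual) query_alias
    }
}
```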

When querying an Oracle database with the above, I keep getting the following error:

Root Exception stack trace:
java.lang.RuntimeException: [3.15] failure: identifier expected

'sandbox_' || mp.email as "course_id",

          ^
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:36)
at org.apache.spark.sql.catalyst.DefaultParserDialect.parse(ParserDialect.scala:67)
at org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:211)
at org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:211)
at org.apache.spark.sql.execution.SparkSQLParser$$anonfun$org$apache$spark$sql$execution$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:114)
at org.apache.spark.sql.execution.SparkSQLParser$$anonfun$org$apache$spark$sql$execution$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:113)
at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:137)
at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:237)
at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:237)
at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:217)
at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:249)
at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:249)
at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:197)
at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:249)
at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:249)
at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:217)
at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:882)
at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:882)

0 Answers