I am using AWS EMR with Spark 1.6.1 and Hive 1.0.0.
I have this UDAF https://github.com/scribd/hive-udaf-maxrow/blob/master/src/com/scribd/hive/udaf/GenericUDAFMaxRow.java
on Spark's classpath, and I register it in Spark via sqlContext.sql("CREATE TEMPORARY FUNCTION maxrow AS 'some.cool.package.hive.udf.GenericUDAFMaxRow'").
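For context, the setup described above looks roughly like the following sketch in a Spark 1.6 application (the jar location and the package name are placeholders taken from the snippet above, not verified):

```scala
// Sketch of registering a Hive UDAF as a temporary function in Spark 1.6.x.
// Assumes the jar containing GenericUDAFMaxRow is already on the classpath
// (e.g. passed with --jars when submitting the application).
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object MaxRowDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("maxrow-demo"))
    // HiveContext is required for Hive UDF/UDAF support; a plain SQLContext
    // does not understand CREATE TEMPORARY FUNCTION.
    val sqlContext = new HiveContext(sc)

    // Package name below is a placeholder from the question, not a real artifact.
    sqlContext.sql(
      "CREATE TEMPORARY FUNCTION maxrow AS " +
      "'some.cool.package.hive.udf.GenericUDAFMaxRow'")
  }
}
```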
However, when I invoke it in Spark with the following query:
CREATE VIEW VIEW_1 AS
SELECT
a.A,
a.B,
maxrow ( a.C,
a.D,
a.E,
a.F,
a.G,
a.H,
a.I
) as m
FROM
table_1 a
JOIN
table_2 b
ON
b.Z = a.D
AND b.Y = a.C
JOIN dummy_table
GROUP BY
a.A,
a.B
it gives me this error:
16/05/18 19:49:14 WARN RowResolver: Duplicate column info for a.A was overwritten in RowResolver map: _col0: string by _col0: string
16/05/18 19:49:14 WARN RowResolver: Duplicate column info for a.B was overwritten in RowResolver map: _col1: bigint by _col1: bigint
16/05/18 19:49:14 ERROR Driver: FAILED: SemanticException [Error 10002]: Line 16:32 Invalid column reference 'C'
org.apache.hadoop.hive.ql.parse.SemanticException: Line 16:32 Invalid column reference 'C'
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genAllExprNodeDesc(SemanticAnalyzer.java:10643)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:10591)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:3656)
However, if I remove the GROUP BY clause and the aggregate function, the query runs fine. So I suspect Spark SQL does not recognize maxrow as an aggregate function.
Any help is appreciated. Thanks.