Unable to run MSCK through Spark SQL

Date: 2017-03-13 09:58:53

Tags: apache-spark hive apache-spark-sql

I created a Hive context object and tried to execute the MSCK command, which adds partitions to a Hive table, but it throws the following exception:

Exception in thread "main" org.apache.spark.sql.catalyst.parser.ParseException:
Operation not allowed: msck repair table(line 1, pos 0)

== SQL ==
msck repair table table_name
^^^

        at org.apache.spark.sql.catalyst.parser.ParserUtils$.operationNotAllowed(ParserUtils.scala:43)
        at org.apache.spark.sql.execution.SparkSqlAstBuilder$$anonfun$visitFailNativeCommand$1.apply(SparkSqlParser.scala:837)
        at org.apache.spark.sql.execution.SparkSqlAstBuilder$$anonfun$visitFailNativeCommand$1.apply(SparkSqlParser.scala:828)
        at org.apache.spark.sql.catalyst.parser.ParserUtils$.withOrigin(ParserUtils.scala:96)
        at org.apache.spark.sql.execution.SparkSqlAstBuilder.visitFailNativeCommand(SparkSqlParser.scala:828)
        at org.apache.spark.sql.execution.SparkSqlAstBuilder.visitFailNativeCommand(SparkSqlParser.scala:53)
        at org.apache.spark.sql.catalyst.parser.SqlBaseParser$FailNativeCommandContext.accept(SqlBaseParser.java:900)
        at org.antlr.v4.runtime.tree.AbstractParseTreeVisitor.visit(AbstractParseTreeVisitor.java:42)
        at org.apache.spark.sql.catalyst.parser.AstBuilder$$anonfun$visitSingleStatement$1.apply(AstBuilder.scala:64)
        at org.apache.spark.sql.catalyst.parser.AstBuilder$$anonfun$visitSingleStatement$1.apply(AstBuilder.scala:64)
        at org.apache.spark.sql.catalyst.parser.ParserUtils$.withOrigin(ParserUtils.scala:96)
        at org.apache.spark.sql.catalyst.parser.AstBuilder.visitSingleStatement(AstBuilder.scala:63)
        at org.apache.spark.sql.catalyst.parser.AbstractSqlParser$$anonfun$parsePlan$1.apply(ParseDriver.scala:54)
        at org.apache.spark.sql.catalyst.parser.AbstractSqlParser$$anonfun$parsePlan$1.apply(ParseDriver.scala:53)
        at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:82)
        at org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:46)
        at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parsePlan(ParseDriver.scala:53)
        at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:582)
        at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:682)
        at com.mcd.spark.driver.R2D2Driver$.main(R2D2Driver.scala:321)
        at com.mcd.spark.driver.R2D2Driver.main(R2D2Driver.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:729)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

The Spark context and Hive context were created as shown below.

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName(appName).setMaster(master)
val sc = new SparkContext(conf)
val hqlContext = new org.apache.spark.sql.hive.HiveContext(sc)

hqlContext.sql("msck repair table table_name")

Can someone help me figure out how to add partitions to a Hive table?

Regards,
Aswin

2 Answers:

Answer 0 (score 0):

Try using "runSqlHive", like:

hqlContext.runSqlHive("msck repair table table_name")
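
If it is accessible in your Spark version, runSqlHive hands the statement straight to the Hive client, so it never passes through the Spark SQL parser that rejects MSCK.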

Or run the command over a Hive JDBC connection:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

try {
      // Submit the repair statement directly to HiveServer2 over JDBC.
      Class.forName("org.apache.hive.jdbc.HiveDriver");
      Connection connection = DriverManager.getConnection("jdbc:hive2://<hostname>:<port>/<db_name>", "<user_name>", "");
      Statement stmt = connection.createStatement();
      stmt.execute("msck repair table table_name");
      stmt.close();
      connection.close();
} catch (final ClassNotFoundException | SQLException e) {
      throw new RuntimeException(e);
}
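
Because this path talks to HiveServer2 directly, Spark's SQL parser is never involved; a try-with-resources block would be a more idiomatic way to guarantee the statement and connection are closed.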

Answer 1 (score 0):

I think I've run into this before. Support for MSCK was added to Spark only recently, along with fast stats support. Could you try the alternate syntax and see if it works for you?

ALTER TABLE {table_name} RECOVER PARTITIONS;
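
For instance, a minimal sketch of issuing that form through the Hive context from the question (assuming the same hqlContext and table name as above):

// The ALTER TABLE spelling of MSCK REPAIR TABLE; it may be accepted
// even where the parser rejects the MSCK syntax.
hqlContext.sql("ALTER TABLE table_name RECOVER PARTITIONS")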

Also, you may want to look at https://issues.apache.org/jira/browse/SPARK-20697 to be aware of possible side effects.