Zeppelin-Spark interpreter fails to create a Hive table using a CTAS (Create Table As Select ...) statement

Date: 2018-09-26 13:29:27

Tags: apache-spark hive apache-zeppelin livy

I am using Zeppelin and trying to create a Hive table from another Hive table with a CTAS statement.

But the query always ends with an error, so the table is never created. I found several posts that suggest modifying Zeppelin's configuration, but I don't have the permissions to change any of it.
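From what I can tell, the change those posts describe boils down to pointing Spark at the Hive metastore version the cluster actually runs. A minimal sketch of it, with the version string and jar source as placeholders I would need an admin to confirm:

// Sketch of the fix those posts describe (not something I can apply myself).
// In Zeppelin, these two entries would go into the Spark interpreter
// properties (Interpreter menu -> spark):
//   spark.sql.hive.metastore.version   0.13.1
//   spark.sql.hive.metastore.jars      maven
// The programmatic equivalent for a standalone job looks like this; both are
// static configs, so they must be set before the SparkSession is created:
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("ctas-metastore-sketch")
  .config("spark.sql.hive.metastore.version", "0.13.1") // assumed cluster version
  .config("spark.sql.hive.metastore.jars", "maven")     // fetch matching client jars
  .enableHiveSupport()
  .getOrCreate()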

The query I executed and the error I got are shown below:

%sql
create table student as select * from student_score
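
For what it's worth, the same statement fails identically when issued from a %spark paragraph, since %sql goes through the same SparkSession (a sketch using the `spark` variable Zeppelin provides):

// Equivalent of the %sql paragraph above, using the SparkSession that
// Zeppelin's %spark interpreter exposes as `spark`; it follows the same
// code path and raises the same HiveException.
spark.sql("create table student as select * from student_score")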
  

org.apache.hadoop.hive.ql.metadata.HiveException: Unable to alter table. Invalid method name: 'alter_table_with_cascade'
  at org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:500)
  at org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:484)
  at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java:1668)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.spark.sql.hive.client.Shim_v0_14.loadTable(HiveShim.scala:716)
  at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadTable$1.apply$mcV$sp(HiveClientImpl.scala:672)
  at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadTable$1.apply(HiveClientImpl.scala:672)
  at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadTable$1.apply(HiveClientImpl.scala:672)
  at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:283)
  at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:230)
  at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:229)
  at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:272)
  at org.apache.spark.sql.hive.client.HiveClientImpl.loadTable(HiveClientImpl.scala:671)
  at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadTable$1.apply$mcV$sp(HiveExternalCatalog.scala:741)
  at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadTable$1.apply(HiveExternalCatalog.scala:739)
  at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadTable$1.apply(HiveExternalCatalog.scala:739)
  at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:95)
  at org.apache.spark.sql.hive.HiveExternalCatalog.loadTable(HiveExternalCatalog.scala:739)
  at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult$lzycompute(InsertIntoHiveTable.scala:323)
  at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult(InsertIntoHiveTable.scala:170)
  at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.doExecute(InsertIntoHiveTable.scala:347)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
  at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:87)
  at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:87)
  at org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand.run(CreateHiveTableAsSelectCommand.scala:92)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
  at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:87)
  at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:87)
  at org.apache.spark.sql.Dataset.<init>(Dataset.scala:185)
  at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
  at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
  ... 47 elided
Caused by: org.apache.thrift.TApplicationException: Invalid method name: 'alter_table_with_cascade'
  at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
  at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
  at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_alter_table_with_cascade(ThriftHiveMetastore.java:1374)
  at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.alter_table_with_cascade(ThriftHiveMetastore.java:1358)
  at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_table(HiveMetaStoreClient.java:340)
  at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.alter_table(SessionHiveMetaStoreClient.java:251)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156)
  at com.sun.proxy.$Proxy25.alter_table(Unknown Source)
  at org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:496)
  ... 93 more
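
My reading of the trace: a Thrift TApplicationException with "Invalid method name" means the metastore server does not implement the RPC the client called. The alter_table_with_cascade call only exists in Hive 1.1 and later, and Spark 2.x ships a Hive 1.2.1 client by default, so the client appears to be newer than the metastore it talks to. A quick check from a %spark paragraph, assuming the config is readable from the session:

// Print the Hive metastore version the Spark session is configured for;
// in Spark 2.x this defaults to 1.2.1, whose loadTable path issues the
// alter_table_with_cascade RPC that pre-1.1 metastores lack.
println(spark.conf.get("spark.sql.hive.metastore.version"))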

0 Answers:

No answers yet.