TApplicationException when running a MapReduce job on an Accumulo table

Asked: 2016-01-18 02:24:26

Tags: java hadoop mapreduce accumulo

I am running a MapReduce job that takes its input from a table in Accumulo and stores its results in another Accumulo table. For this I am using the AccumuloInputFormat and AccumuloOutputFormat classes. Here is the code:

public int run(String[] args) throws Exception {

        Opts opts = new Opts();
        opts.parseArgs(PivotTable.class.getName(), args);

        Configuration conf = getConf();

        conf.set("formula", opts.formula);

        Job job = Job.getInstance(conf);

        job.setJobName("Pivot Table Generation");
        job.setJarByClass(PivotTable.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        job.setMapperClass(PivotTableMapper.class);
        job.setCombinerClass(PivotTableCombiber.class);
        job.setReducerClass(PivotTableReducer.class);

        job.setInputFormatClass(AccumuloInputFormat.class);

        ClientConfiguration zkConfig = new ClientConfiguration()
                .withInstance(opts.getInstance().getInstanceName())
                .withZkHosts(opts.getInstance().getZooKeepers());

        AccumuloInputFormat.setInputTableName(job, opts.dataTable);
        AccumuloInputFormat.setZooKeeperInstance(job, zkConfig);
        AccumuloInputFormat.setConnectorInfo(job, opts.getPrincipal(), new PasswordToken(opts.getPassword().value));

        job.setOutputFormatClass(AccumuloOutputFormat.class);

        BatchWriterConfig bwConfig = new BatchWriterConfig();

        AccumuloOutputFormat.setBatchWriterOptions(job, bwConfig);
        AccumuloOutputFormat.setZooKeeperInstance(job, zkConfig);
        AccumuloOutputFormat.setConnectorInfo(job, opts.getPrincipal(), new PasswordToken(opts.getPassword().value));
        AccumuloOutputFormat.setDefaultTableName(job, opts.pivotTable);
        AccumuloOutputFormat.setCreateTables(job, true);

        return job.waitForCompletion(true) ? 0 : 1;
    }

PivotTable is the name of the class that contains the main method (and this run method as well). I have also written the mapper, combiner, and reducer classes. However, when I try to run the job, I get this error:

Exception in thread "main" java.io.IOException: org.apache.accumulo.core.client.AccumuloException: org.apache.thrift.TApplicationException: Internal error processing hasTablePermission
        at org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.validatePermissions(InputConfigurator.java:707)
        at org.apache.accumulo.core.client.mapreduce.AbstractInputFormat.validateOptions(AbstractInputFormat.java:397)
        at org.apache.accumulo.core.client.mapreduce.AbstractInputFormat.getSplits(AbstractInputFormat.java:668)
        at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
        at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
        at com.latize.ulysses.accumulo.postprocess.PivotTable.run(PivotTable.java:247)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at com.latize.ulysses.accumulo.postprocess.PivotTable.main(PivotTable.java:251)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.accumulo.core.client.AccumuloException: org.apache.thrift.TApplicationException: Internal error processing hasTablePermission
        at org.apache.accumulo.core.client.impl.SecurityOperationsImpl.execute(SecurityOperationsImpl.java:87)
        at org.apache.accumulo.core.client.impl.SecurityOperationsImpl.hasTablePermission(SecurityOperationsImpl.java:220)
        at org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.validatePermissions(InputConfigurator.java:692)
        ... 21 more
Caused by: org.apache.thrift.TApplicationException: Internal error processing hasTablePermission
        at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
        at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
        at org.apache.accumulo.core.client.impl.thrift.ClientService$Client.recv_hasTablePermission(ClientService.java:641)
        at org.apache.accumulo.core.client.impl.thrift.ClientService$Client.hasTablePermission(ClientService.java:624)
        at org.apache.accumulo.core.client.impl.SecurityOperationsImpl$8.execute(SecurityOperationsImpl.java:223)
        at org.apache.accumulo.core.client.impl.SecurityOperationsImpl$8.execute(SecurityOperationsImpl.java:220)
        at org.apache.accumulo.core.client.impl.ServerClient.executeRaw(ServerClient.java:79)
        at org.apache.accumulo.core.client.impl.SecurityOperationsImpl.execute(SecurityOperationsImpl.java:73)

Can anyone tell me what I am doing wrong here? Any help would be appreciated.

Edit: I am running Accumulo 1.7.0.

1 Answer:

Answer 0 (score: 1)

A TApplicationException indicates that an error occurred on an Accumulo tablet server, not in your client (MapReduce) code. You will need to examine the tablet server logs for details about the specific error, wherever the TApplicationException occurred.
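When unwrapping a nested failure like the one in the question, it can help to walk the exception's cause chain down to the root programmatically rather than reading it off the stack trace by eye. The helper below is a generic sketch; the class and method names are mine, not part of Accumulo or Hadoop:

```java
// RootCause.java - walk a Throwable's cause chain to its innermost exception.
public class RootCause {

    // Follow getCause() links until there is no deeper cause.
    public static Throwable rootCause(Throwable t) {
        Throwable cur = t;
        while (cur.getCause() != null && cur.getCause() != cur) {
            cur = cur.getCause();
        }
        return cur;
    }

    public static void main(String[] args) {
        // Simulate the nesting from the stack trace: IOException wrapping
        // AccumuloException wrapping the thrift TApplicationException.
        Throwable thrift = new RuntimeException("Internal error processing hasTablePermission");
        Throwable accumulo = new RuntimeException("AccumuloException", thrift);
        Throwable outer = new java.io.IOException("wrapper", accumulo);
        System.out.println(rootCause(outer).getMessage());
        // prints: Internal error processing hasTablePermission
    }
}
```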

Table permissions are typically retrieved from ZooKeeper, so this may indicate a problem with the tablet servers connecting to ZooKeeper.

Unfortunately, I don't see a hostname or IP in the stack trace, so you may have to check all of the tserver logs to find it.
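One way to narrow down which tserver hit the error is to scan each log file for the exception name and print the matching lines with their line numbers. A stdlib-only sketch (the log path is a hypothetical placeholder; adjust it to your Accumulo installation):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class LogScan {

    // Return the log lines that mention the given exception name,
    // each prefixed with its 1-based line number for easy lookup.
    public static List<String> findMentions(List<String> logLines, String needle) {
        List<String> hits = new ArrayList<>();
        for (int i = 0; i < logLines.size(); i++) {
            if (logLines.get(i).contains(needle)) {
                hits.add((i + 1) + ": " + logLines.get(i));
            }
        }
        return hits;
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical default path; pass your own tserver log as an argument.
        Path log = Paths.get(args.length > 0 ? args[0] : "/var/log/accumulo/tserver.log");
        for (String hit : findMentions(Files.readAllLines(log), "TApplicationException")) {
            System.out.println(hit);
        }
    }
}
```

Running this over each tserver's log should surface the full server-side stack trace around `Internal error processing hasTablePermission`, which usually names the real cause (for example, a ZooKeeper connection problem).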