ERROR TableInputFormat: java.lang.NullPointerException in org.apache.hadoop.hbase.TableName.valueOf

Asked: 2015-11-04 05:05:10

Tags: hadoop apache-spark hbase apache-zookeeper hortonworks-data-platform

I am trying to read data from HBase using Spark. The versions I am using are Spark 1.3.1 and HBase 1.1.1. I get the following error:

ERROR TableInputFormat: java.lang.NullPointerException                                                              
    at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:417)                                                              
    at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:159)                                                              
    at org.apache.hadoop.hbase.mapreduce.TableInputFormat.setConf(TableInputFormat.java:101)                                      
    at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:91)                                                     
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)                                                        
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)                                                        
    at scala.Option.getOrElse(Option.scala:120)                                                                                   
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)                                                                         
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)                                             
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)                                                        
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)                                                        
    at scala.Option.getOrElse(Option.scala:120)                                                                                   
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)                                                                         
    at org.apache.spark.ShuffleDependency.<init>(Dependency.scala:82)                                                             
    at org.apache.spark.rdd.ShuffledRDD.getDependencies(ShuffledRDD.scala:80)                                                     
    at org.apache.spark.rdd.RDD$$anonfun$dependencies$2.apply(RDD.scala:206)                                                      
    at org.apache.spark.rdd.RDD$$anonfun$dependencies$2.apply(RDD.scala:204)                                                      
    at scala.Option.getOrElse(Option.scala:120)                                                                                   
    at org.apache.spark.rdd.RDD.dependencies(RDD.scala:204)                                                                       
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal(DAGScheduler.scal

The code is as follows:

 public static void main( String[] args )
{
    String TABLE_NAME = "Hello";
    HTable table=null;
    SparkConf sparkConf = new SparkConf();
    sparkConf.setAppName("Data Reader").setMaster("local[1]");
    sparkConf.set("spark.executor.extraClassPath", "$(hbase classpath)");

    JavaSparkContext sparkContext = new JavaSparkContext(sparkConf);

    Configuration hbConf = HBaseConfiguration.create();
    hbConf.set("zookeeper.znode.parent", "/hbase-unsecure");
    try {
         table = new HTable(hbConf, Bytes.toBytes(TABLE_NAME));

    } catch (IOException e) {

        e.printStackTrace();
    }

    JavaPairRDD<ImmutableBytesWritable, Result> hBaseRDD = sparkContext
            .newAPIHadoopRDD(
                    hbConf,
                    TableInputFormat.class,
                    org.apache.hadoop.hbase.io.ImmutableBytesWritable.class,
                    org.apache.hadoop.hbase.client.Result.class);
    hBaseRDD.coalesce(1, true);
    System.out.println("Count "+hBaseRDD.count());
    //.saveAsTextFile("hBaseRDD");
    try {
        table.close();
        sparkContext.close();
    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}

I am unable to resolve the issue. I am using the Hortonworks Sandbox.

1 Answer:

Answer 0 (score: 2)

You wrote:

try {
     table = new HTable(hbConf, Bytes.toBytes(TABLE_NAME));

} catch (IOException e) {

     e.printStackTrace();
}

If you are using the 1.1.1 API:

In the devapidocs I can only see two constructors:

    protected HTable(ClusterConnection conn, BufferedMutatorParams params)
        For internal testing.

    protected HTable(TableName tableName, ClusterConnection connection, TableConfiguration tableConfig, RpcRetryingCallerFactory rpcCallerFactory, RpcControllerFactory rpcControllerFactory, ExecutorService pool)
        Creates an object to access an HBase table.

The params argument of the first constructor is built with BufferedMutatorParams(TableName tableName), and TableName itself has no public constructor.
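
For illustration only (this sketch is not part of the original answer; the class name is a placeholder and the "Hello" table name is borrowed from the question), this is how a TableName and a BufferedMutatorParams are typically obtained:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.BufferedMutatorParams;

    public class TableNameExample {
        public static void main(String[] args) {
            // TableName has no public constructor; instances come from the static
            // valueOf factory, the same call that throws the NullPointerException
            // in the stack trace above when it receives a null table name.
            TableName tableName = TableName.valueOf("Hello");

            // BufferedMutatorParams is constructed from a TableName, as noted above;
            // it is only built here to show the API.
            BufferedMutatorParams params = new BufferedMutatorParams(tableName);
            System.out.println(tableName);
        }
    }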

So you would have to initialize your HTable like this:

table = new HTable(hbConf, new BufferedMutatorParams(TableName.valueOf(TABLE_NAME)));

If you are using the 0.94 API:

The constructors of HTable are:

    HTable(byte[] tableName, HConnection connection)
        Creates an object to access an HBase table.

    HTable(byte[] tableName, HConnection connection, ExecutorService pool)
        Creates an object to access an HBase table.

    HTable(org.apache.hadoop.conf.Configuration conf, byte[] tableName)
        Creates an object to access an HBase table.

    HTable(org.apache.hadoop.conf.Configuration conf, byte[] tableName, ExecutorService pool)
        Creates an object to access an HBase table.

    HTable(org.apache.hadoop.conf.Configuration conf, String tableName)
        Creates an object to access an HBase table.

So, looking at the last one, you only need to pass the table name as a String instead of a byte[]:

table = new HTable(hbConf, TABLE_NAME);

and it should be fine.
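
As a rough sketch of how that fits into the question's code (my own illustration, assuming the 0.94-style HTable(Configuration, String) constructor is the one available on the classpath; the class name is a placeholder), the relevant part of the main method would look roughly like this:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;

    public class HTableOpenExample {
        public static void main(String[] args) {
            // Same HBase configuration as in the question.
            Configuration hbConf = HBaseConfiguration.create();
            hbConf.set("zookeeper.znode.parent", "/hbase-unsecure");

            HTable table = null;
            try {
                // Pass the table name as a plain String instead of Bytes.toBytes(TABLE_NAME).
                table = new HTable(hbConf, "Hello");
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

Under this suggestion, the remaining Spark code from the question would stay unchanged.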