HBase table load error in PySpark

Asked: 2016-07-27 09:39:01

Tags: apache-spark hbase pyspark

I am trying to read an HBase table in PySpark.

Here is my code.

from pyspark.sql.types import *

host = 'localhost'

keyConv = "org.apache.spark.examples.pythonconverters.ImmutableBytesWritableToStringConverter"
valueConv = "org.apache.spark.examples.pythonconverters.HBaseResultToStringConverter"

testdata_conf = {
    "hbase.zookeeper.quorum": host,
    "hbase.mapreduce.inputtable": "test",
    "hbase.mapreduce.scan.columns": "cf:a"
}

testdata_rdd = sc.newAPIHadoopRDD(
    "org.apache.hadoop.hbase.mapreduce.TableInputFormat",
    "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
    "org.apache.hadoop.hbase.client.Result",
    keyConverter=keyConv,
    valueConverter=valueConv,
    conf=testdata_conf)

output = testdata_rdd.collect()
output

However, I get this error:

An error occurred while calling z:org.apache.spark.api.python.PythonRDD.newAPIHadoopRDD.
: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.io.ImmutableBytesWritable

I referred to this link, RDD is having only first column value : Hbase, PySpark, for loading the table. I have never used Java or Scala, so I cannot understand why this error happens.

If anyone has a suggestion, please let me know. Thank you.

2 Answers:

Answer 0 (score: 0)

Since I have only just started using (py)Spark myself, I can feel for you. You have to add the jars to the pyspark command with the --jars option. Here is an example (we are on Cloudera, so make sure the jars are somewhere you can reach them):

pyspark --jars /opt/cloudera/parcels/CDH/jars/spark-examples-1.6.0-cdh5.10.0-hadoop2.6.0-cdh5.10.0.jar,/opt/cloudera/parcels/CDH/jars/hbase-examples-1.2.0-cdh5.10.0.jar

Your exception is a "ClassNotFoundException", which means the jar is not on your classpath.

To check whether your extra jars are correctly attached to the classpath, look at the info messages printed when pyspark starts up. There must be a line like this:

INFO ui.SparkUI: Started SparkUI at http://xx.xx.xx.xx:4040

where the x's are the IP of your cluster.

Open a browser and navigate to this address. You will see the cluster status. Click on "Environment" and scroll to the bottom of the page. You will see your jars followed by "Added By User".
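You can also double-check from the pyspark shell itself. This is just a minimal sketch: it assumes the jars were passed with --jars, which is what populates the spark.jars property, and that sc is the shell's SparkContext.

# Minimal sketch: list the jars Spark knows about, assuming they were
# supplied via --jars (which ends up in the "spark.jars" property).
jars = sc.getConf().get("spark.jars", "")
for jar in filter(None, jars.split(",")):
    print(jar)  # each of these should also appear as "Added By User" in the UI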

Here is my script, which runs without any problems for me:

from pyspark import SparkContext
from pyspark.sql import HiveContext
from pyspark.streaming import StreamingContext

def main():
    # The SparkContext might be initialized by the spark Shell
    sc = SparkContext("local[2]", appName='SparkHBaseWriter')
    # Config to write to a hBaseFile
    conf = {"hbase.zookeeper.qourum": "quickstart.cloudera:2181",\
                "zookeeper.znode.parent": "/hbase",\
                "hbase.mapred.outputtable": "test",\
                "mapreduce.outputformat.class": "org.apache.hadoop.hbase.mapreduce.TableOutputFormat",\
                "mapreduce.job.output.key.class": "org.apache.hadoop.hbase.io.ImmutableBytesWritable",\
                "mapreduce.job.output.value.class": "org.apache.hadoop.io.Writable"}
    keyConv = "org.apache.spark.examples.pythonconverters.StringToImmutableBytesWritableConverter"
    valueConv = "org.apache.spark.examples.pythonconverters.StringListToPutConverter"
    rdd = sc.parallelize((("row1", ["row1", "cf1", "cel1", "value from PySpark"]),))
    rdd.saveAsNewAPIHadoopDataset(conf=conf,keyConverter=keyConv,valueConverter=valueConv)

    # read data from hBase
    conf = {"hbase.zookeeper.qourum": "sbglboclomd0002.santanderde.dev.corp:2181",\
                "zookeeper.znode.parent": "/hbase",\
                "hbase.mapred.outputtable": "test",\
                "mapreduce.outputformat.class": "org.apache.hadoop.hbase.mapreduce.TableOutputFormat",\
                "mapreduce.job.output.key.class": "org.apache.hadoop.hbase.io.ImmutableBytesWritable",\
                "mapreduce.job.output.value.class": "org.apache.hadoop.io.Writable"}

    host = 'localhost'

    keyConv = "org.apache.spark.examples.pythonconverters.ImmutableBytesWritableToStringConverter"

    valueConv = "org.apache.spark.examples.pythonconverters.HBaseResultToStringConverter"  

    testdata_conf = {
        "hbase.zookeeper.quorum": "quickstart.cloudera:2181", 
        "hbase.mapreduce.inputtable": "test", 
        "hbase.mapreduce.scan.columns": "cf1:cel1"
        }

    testdata_rdd = sc.newAPIHadoopRDD(
        "org.apache.hadoop.hbase.mapreduce.TableInputFormat",
        "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
        "org.apache.hadoop.hbase.client.Result",
        keyConverter=keyConv,
        valueConverter=valueConv,
        conf=testdata_conf)

    output = testdata_rdd.collect()
    print(output[0])
    # Output in console: 
    # >>> testdata_rdd.collect()[0]
    # (u'row1', u'{"qualifier" : "cel1", "timestamp" : "1499151230221", "columnFamily" : "cf1", "row" : "row1", "type" : "Put", "value" : "value from PySpark"}')


if __name__ == '__main__':
    main()
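To launch a standalone script like this one, the converter and HBase jars still have to be on the classpath. A sketch of the submit command, reusing the Cloudera jar paths shown earlier in this answer (the script file name is just a placeholder):

spark-submit \
  --jars /opt/cloudera/parcels/CDH/jars/spark-examples-1.6.0-cdh5.10.0-hadoop2.6.0-cdh5.10.0.jar,/opt/cloudera/parcels/CDH/jars/hbase-examples-1.2.0-cdh5.10.0.jar \
  hbase_read_write.py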

I know your question is old, but I hope this helps someone else.

Answer 1 (score: 0)

First, you should add all of the HBase jars to the Spark lib. If you still have the problem after that, you may also need to add the 1.6.0-typesafe-001 jars.

It seems that (py)Spark cannot convert the data coming from HBase into Python data on its own.

So you need to add this jar to the Spark libraries, or add its path in spark-defaults.conf.

It looks like this:

spark.executor.extraClassPath=/home/guszhang/app/spark-2.2.0-bin-hadoop2.6/exteral_jars/HABSE_JARS/*
spark.driver.extraClassPath=/home/guszhang/app/spark-2.2.0-bin-hadoop2.6/exteral_jars/HABSE_JARS/*
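If you prefer not to edit spark-defaults.conf, the same two properties can be passed per job with --conf. This is only a sketch: /path/to/hbase/jars and your_script.py are placeholders for wherever your HBase jars and script actually live.

spark-submit \
  --conf spark.driver.extraClassPath="/path/to/hbase/jars/*" \
  --conf spark.executor.extraClassPath="/path/to/hbase/jars/*" \
  your_script.py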