HBase filters with PySpark

Time: 2019-01-24 06:21:29

Tags: filter pyspark hbase

I am trying to read and filter data from HBase via PySpark. So far I can do a full scan and a range scan (using start and stop rows), but I have no idea how to use filters such as ValueFilter, ColumnPrefixFilter, etc.

I tried using the class hbase.filter.FilterBase.filterRowKey.

While searching on Google, I found that a similar question was asked before but never answered: Spark: How to use HBase filter e.g QualiferFilter by python-api

Note: I am on a Cloudera distribution and need to do this through PySpark (I hear it is easy to do via Scala/Java).

Here is the code:

host = "host@xyz.com"
keyConv = "org.apache.spark.examples.pythonconverters.ImmutableBytesWritableToStringConverter"

valueConv = "org.apache.spark.examples.pythonconverters.HBaseResultToStringConverter"

conf_read = {"hbase.master.kerberos.principal": "hbase/_HOST@xyz.com",
        "hbase.rpc.protection": "privacy",
        "hadoop.security.authentication": "kerberos",
        "hadoop.rpc.protection": "privacy",
        "hbase.regionserver.kerberos.principal": "hbase/_HOST@xyz.com",
        "hbase.security.authentication": "kerberos",
        "hbase.zookeeper.property.clientPort": "2181",
        "zookeeper.znode.parent": "/hbase",
        "hbase.zookeeper.quorum": host,
        "hbase.mapreduce.inputtable": "namespace:tablename",
        #"hbase.mapreduce.scan.row.start": "row2",  # this works
        #"hbase.mapreduce.scan.row.stop": "row4",   # this works
        "hbase.filter.FilterBase.filterRowKey": "row3"}  # this does not work

testdata_rdd = spark.sparkContext.newAPIHadoopRDD(
        "org.apache.hadoop.hbase.mapreduce.TableInputFormat",
        "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
        "org.apache.hadoop.hbase.client.Result",
        keyConverter=keyConv,
        valueConverter=valueConv,
        conf=conf_read)

output = testdata_rdd.collect()

print(output)
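
One approach I have not yet tried, shown here only as a rough, untested sketch: build the Scan and its Filter on the JVM side through Py4J, serialize the Scan with TableMapReduceUtil.convertScanToString, and pass the result to TableInputFormat under the "hbase.mapreduce.scan" key (instead of the FilterBase key above) before calling newAPIHadoopRDD. This assumes the HBase client jars are on the PySpark driver classpath and that convertScanToString is public in the installed HBase version.

# Untested sketch: construct the filter via the JVM gateway (assumes HBase
# jars are visible to the PySpark driver).
jvm = spark.sparkContext._jvm

# Scan whose filter keeps only rows whose key starts with "row3";
# ColumnPrefixFilter, ValueFilter, etc. would be set the same way.
scan = jvm.org.apache.hadoop.hbase.client.Scan()
prefix = jvm.org.apache.hadoop.hbase.util.Bytes.toBytes("row3")
scan.setFilter(jvm.org.apache.hadoop.hbase.filter.PrefixFilter(prefix))

# Serialize the Scan into the base64 string TableInputFormat reads from
# the "hbase.mapreduce.scan" property.
scan_str = jvm.org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.convertScanToString(scan)

conf_read["hbase.mapreduce.scan"] = scan_str  # replaces the hbase.filter.FilterBase.filterRowKey entry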

0 Answers:

There are no answers yet.