I am using pyspark [Spark 2.3.1] with HBase 1.2.1, and I would like to know what the best way to access HBase from pyspark is.
I did some initial searching and found that there are hardly any options available, such as using shc-core:1.1.1-2.1-s_2.11.jar. This can work, but wherever I look for examples, the code is written in Scala or the examples are Scala-based. I tried to implement basic code in pyspark:
from pyspark import SparkContext
from pyspark.sql import SQLContext

def main():
    sc = SparkContext()
    sqlc = SQLContext(sc)
    data_source_format = 'org.apache.spark.sql.execution.datasources.hbase'
    catalog = ''.join("""{
        "table":{"namespace":"default", "name":"firsttable"},
        "rowkey":"key",
        "columns":{
            "firstcol":{"cf":"rowkey", "col":"key", "type":"string"},
            "secondcol":{"cf":"d", "col":"colname", "type":"string"}
        }
    }""".split())
    df = sqlc.read.options(catalog=catalog).format(data_source_format).load()
    df.select("secondcol").show()

# entry point for PySpark application
if __name__ == '__main__':
    main()
and ran it using the following command:
spark-submit --master yarn-client --files /opt/hbase-1.1.2/conf/hbase-site.xml --packages com.hortonworks:shc-core:1.1.1-2.1-s_2.11 --jars /home/ubuntu/hbase-spark-2.0.0-alpha4.jar HbaseMain2.py
which returns blank output:
+---------+
|secondcol|
+---------+
+---------+
I am not sure what I am doing wrong, nor what the best way of doing this is.
Any references would be much appreciated.
Regards
Answer 0: (score: 2)
Finally, using SHC, I was able to connect to HBase-1.2.1 with Spark-2.3.1 using pyspark code. Here is what I did:
All my Hadoop [namenode, datanode, nodemanager, resourcemanager] and HBase [HMaster, HRegionServer, HQuorumPeer] daemons were started and running on my EC2 instance.
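A quick way to sanity-check that everything is up (assuming a standard Hadoop/HBase installation where the JDK's jps tool is on the PATH) is to run jps on the instance; it should list the daemons named above:
jps    # expect NameNode, DataNode, ResourceManager, NodeManager, HMaster, HRegionServer, HQuorumPeer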
I placed an emp.csv file at the HDFS location /test/emp.csv, containing the following data:
key,empId,empName,empWeight
1,"E007","Bhupesh",115.10
2,"E008","Chauhan",110.23
3,"E009",Prithvi,90.0
4,"E0010","Raj",80.0
5,"E0011","Chauhan",100.0
- I created a readwriteHBase.py file with the following lines of code [it reads the emp.csv file from HDFS, first creates the table tblEmployee in HBase, pushes the data into tblEmployee, then reads some data back from the same table and displays it on the console]:
from pyspark.sql import SparkSession

def main():
    spark = SparkSession.builder.master("yarn-client").appName("HelloSpark").getOrCreate()

    dataSourceFormat = "org.apache.spark.sql.execution.datasources.hbase"

    # catalog mapping DataFrame columns to the HBase row key and the 'personal' column family
    writeCatalog = ''.join("""{
        "table":{"namespace":"default", "name":"tblEmployee", "tableCoder":"PrimitiveType"},
        "rowkey":"key",
        "columns":{
            "key":{"cf":"rowkey", "col":"key", "type":"int"},
            "empId":{"cf":"personal","col":"empId","type":"string"},
            "empName":{"cf":"personal", "col":"empName", "type":"string"},
            "empWeight":{"cf":"personal", "col":"empWeight", "type":"double"}
        }
    }""".split())

    # read emp.csv from HDFS into a DataFrame
    writeDF = spark.read.format("csv").option("header", "true").option("inferSchema", "true").load("/test/emp.csv")
    print("csv file read", writeDF.show())

    # write the DataFrame to HBase; newtable=5 lets SHC create the table (see the note on NumberOfRegions below)
    writeDF.write.options(catalog=writeCatalog, newtable=5).format(dataSourceFormat).save()
    print("csv file written to HBase")

    # catalog for reading back only key, empId and empName
    readCatalog = ''.join("""{
        "table":{"namespace":"default", "name":"tblEmployee"},
        "rowkey":"key",
        "columns":{
            "key":{"cf":"rowkey", "col":"key", "type":"int"},
            "empId":{"cf":"personal","col":"empId","type":"string"},
            "empName":{"cf":"personal", "col":"empName", "type":"string"}
        }
    }""".split())

    print("going to read data from Hbase table")
    readDF = spark.read.options(catalog=readCatalog).format(dataSourceFormat).load()
    print("data read from HBase table")
    readDF.select("empId", "empName").show()
    readDF.show()

# entry point for PySpark application
if __name__ == '__main__':
    main()
- I ran this script on the VM console using the following command:
spark-submit --master yarn-client --packages com.hortonworks:shc-core:1.1.1-2.1-s_2.11 --repositories http://nexus-private.hortonworks.com/nexus/content/repositories/IN-QA/ readwriteHBase.py
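Depending on how the cluster is configured, the HBase client configuration may also need to be shipped with the job, as was done in the question's command, for example:
spark-submit --master yarn-client --files /opt/hbase-1.1.2/conf/hbase-site.xml --packages com.hortonworks:shc-core:1.1.1-2.1-s_2.11 --repositories http://nexus-private.hortonworks.com/nexus/content/repositories/IN-QA/ readwriteHBase.py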
- Intermediate result: after reading the CSV file:
+---+-----+-------+---------+
|key|empId|empName|empWeight|
+---+-----+-------+---------+
|  1| E007|Bhupesh|    115.1|
|  2| E008|Chauhan|   110.23|
|  3| E009|Prithvi|     90.0|
|  4|E0010|    Raj|     80.0|
|  5|E0011|Chauhan|    100.0|
+---+-----+-------+---------+
- Final output: after reading the data back from the HBase table:
+-----+-------+
|empId|empName|
+-----+-------+
| E007|Bhupesh|
| E008|Chauhan|
| E009|Prithvi|
|E0010|    Raj|
|E0011|Chauhan|
+-----+-------+
Note: while creating the HBase table and inserting data into it, SHC expects NumberOfRegions to be greater than 3, hence I added newtable=5 in the write options when pushing data to HBase.
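As a sanity check on the HBase side (a sketch; the table name comes from the catalogs above, and the exact shell output depends on the HBase version), the written rows can also be inspected directly from the HBase shell:
hbase shell
scan 'tblEmployee', {LIMIT => 2}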