pyspark hangs on save

Time: 2019-01-11 12:30:31

Tags: pyspark hbase

I am trying to read from and write to HBase via pyspark, using the Hortonworks shc connector (shc:1.0.0-1.6-s_2.10) for the HBase connection. The job always hangs at storage.BlockManagerInfo. I have tried increasing the executor memory, but that did not help either.

Spark submit command:  

spark-submit \
  --verbose \
  --master yarn-client \
  --packages com.hortonworks:shc:1.0.0-1.6-s_2.10 \
  --conf spark.executor.memory=4g \
  --conf spark.executor.cores=1 \
  /home/hbaseuser/sasi/pysprk/test.py
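
As an aside, the shc README's sample submission also adds the Hortonworks repository for resolving the package and ships hbase-site.xml to the executors; without that file on the classpath, the HBase client keeps retrying its ZooKeeper connection, which can look exactly like a hang. A sketch of that form, assuming the usual /etc/hbase/conf location:

spark-submit \
  --verbose \
  --master yarn-client \
  --packages com.hortonworks:shc:1.0.0-1.6-s_2.10 \
  --repositories http://repo.hortonworks.com/content/groups/public/ \
  --files /etc/hbase/conf/hbase-site.xml \
  --conf spark.executor.memory=4g \
  --conf spark.executor.cores=1 \
  /home/hbaseuser/sasi/pysprk/test.py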

Code:

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext()
sqlc = SQLContext(sc)
print "starting"
data_source_format = 'org.apache.spark.sql.execution.datasources.hbase'
sc._jsc.hadoopConfiguration().set("parquet.enable.summary-metadata", "false")

df = sc.parallelize([('a', '1.0'), ('b', '2.0')]).toDF(schema=['col0', 'col1'])

# ''.join(s.split()) strips all whitespace, collapsing the multi-line
# JSON below into the single-line catalog string shc expects.
catalog = ''.join("""{
    "table":{"namespace":"default", "name":"testtable-1"},
    "rowkey":"key",
    "columns":{
        "col0":{"cf":"rowkey", "col":"key", "type":"string"},
        "col1":{"cf":"cf", "col":"col1", "type":"string"}
    }
}""".split())


print df

# Writing
df.write \
    .options(catalog=catalog) \
    .format(data_source_format) \
    .save()
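
For reference, a minimal sketch of the matching shc calls, following the patterns in the shc README; df_read is an illustrative name, and "5" (the number of regions shc creates for a new table) is the README's sample value:

# When the HBase table does not exist yet, shc's examples pass a
# "newtable" option giving the number of regions to create.
df.write \
    .options(catalog=catalog, newtable="5") \
    .format(data_source_format) \
    .save()

# Reading back through the same catalog.
df_read = sqlc.read \
    .options(catalog=catalog) \
    .format(data_source_format) \
    .load()
df_read.show()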

0 Answers:

No answers yet.