I want to efficiently save/read numpy arrays to/from HDFS from the worker functions in PySpark. I have two machines, A and B. A runs the master and a worker; B runs one worker. For example, I want to achieve the following:
from pyspark import SparkConf, SparkContext

# func must be defined before it is referenced in the main block
def func(iterator):
    P = << LOAD from HDFS or shared memory as numpy array >>
    for x in iterator:
        P = P + x
    << SAVE P (numpy array) to HDFS / shared file system >>

if __name__ == "__main__":
    conf = SparkConf().setMaster("local").setAppName("Test")
    sc = SparkContext(conf=conf)
    sc.parallelize([0, 1, 2, 3], 2).foreachPartition(func)
What is a fast and efficient way to do this?
Answer 0 (score: 1)
import numpy
from hdfs import InsecureClient
from tempfile import TemporaryFile

def get_hdfs_client():
    return InsecureClient("<your webhdfs uri>", user="<hdfs user>",
                          root="<hdfs base path>")

hdfs_client = get_hdfs_client()

# load from file.npy
path = "/whatever/hdfs/file.npy"
tf = TemporaryFile()
with hdfs_client.read(path) as reader:
    tf.write(reader.read())
tf.seek(0)  # important: set the cursor back to the beginning of the file
np_array = numpy.load(tf)
...

# save to file.npy
tf = TemporaryFile()
numpy.save(tf, np_array)
tf.seek(0)  # important: set the cursor back to the beginning of the file
# with overwrite=False, an exception is thrown if the file already exists
hdfs_client.write("/whatever/output/file.npy", tf.read(), overwrite=True)
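For completeness, here is a minimal sketch of how these helpers might be wired into the partition function from the question. The webhdfs URI, user, and paths are the same placeholders as above; the per-partition output naming via uuid4 is an assumption added so that two partitions do not overwrite the same file, and the client is created inside func so nothing non-picklable has to be shipped from the driver.

import uuid
import numpy
from hdfs import InsecureClient
from tempfile import TemporaryFile

def func(iterator):
    # create the client on the worker itself; placeholder URI/user as above
    client = InsecureClient("<your webhdfs uri>", user="<hdfs user>")

    # load the shared array from HDFS via a temporary file
    tf = TemporaryFile()
    with client.read("/whatever/hdfs/file.npy") as reader:
        tf.write(reader.read())
    tf.seek(0)
    P = numpy.load(tf)

    for x in iterator:
        P = P + x

    # save the result; a unique name per partition (assumed here)
    # avoids collisions between partitions writing concurrently
    out = TemporaryFile()
    numpy.save(out, P)
    out.seek(0)
    client.write("/whatever/output/%s.npy" % uuid.uuid4(), out.read(),
                 overwrite=False)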
Notes:

- the URI for the webhdfs client must begin with http://, because it uses the web interface of the HDFS file system;
- the advantage of using temporary files (compared with regular files in /tmp) is that you ensure no garbage files are left on the cluster machines after the script ends, whether it terminates normally or not.
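As a design note, if the arrays fit comfortably in memory, an in-memory buffer can replace the temporary file entirely, skipping the worker's local disk. This is an untested variant of the same approach using only io.BytesIO from the standard library; URI, user, and paths are again placeholders.

import io
import numpy
from hdfs import InsecureClient

client = InsecureClient("<your webhdfs uri>", user="<hdfs user>")
np_array = numpy.arange(4)  # example array

# save: serialize the array into an in-memory buffer, then upload its bytes
buf = io.BytesIO()
numpy.save(buf, np_array)
buf.seek(0)
client.write("/whatever/output/file.npy", buf.read(), overwrite=True)

# load: download the bytes and deserialize straight from memory
with client.read("/whatever/hdfs/file.npy") as reader:
    np_array = numpy.load(io.BytesIO(reader.read()))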