I am using pydoop to read and write files in pyspark. I want to write my job output in gzip format. My current code looks like this:
import os
import numpy as np
import pydoop.hdfs as hdfs

def create_data_distributed(workerNum, outputDir, centers, noSamples=10, var=0.1):
    numCenters = centers.shape[0]
    dim = centers.shape[1]
    fptr_out = hdfs.hdfs().open_file(os.path.join(outputDir, "part-%05d" % workerNum) + ".txt", "w")
    for idx in range(noSamples):
        idxCenter = np.random.randint(numCenters)
        sample = centers[idxCenter] + np.random.normal(size=(1, dim))
        # output the sample: center index, then the coordinates
        fptr_out.write("%d, " % idxCenter)
        for i in range(len(sample[0])):
            fptr_out.write("%f " % sample[0][i])
            if i < len(sample[0]) - 1:
                fptr_out.write(",")
        fptr_out.write("\n")
    fptr_out.close()
    return
How can I change this code so that it opens and writes a gzip file instead of a regular file?
Thanks!!!
Answer 0 (score: 2)
I expect you can do this by wrapping the returned file-like object in a gzip.GzipFile from the standard-library gzip module, so that this:
fptr_out = hdfs.hdfs().open_file(...)

becomes:

hdfs_file = hdfs.hdfs().open_file(...)
fptr_out = gzip.GzipFile(mode='wb', fileobj=hdfs_file)
Note that you have to call close() in both places, since GzipFile.close() does not close the underlying file object:
fptr_out.close()
hdfs_file.close()
It is cleaner to use with statements:
output_filename = os.path.join(outputDir, "part-%05d" % workerNum) + ".txt.gz"
with hdfs.hdfs().open_file(output_filename, "wb") as hdfs_file:
    with gzip.GzipFile(mode='wb', fileobj=hdfs_file) as fptr_out:
        ...
All of this is untested. Use at your own risk.
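For completeness, here is a minimal, untested sketch of the original function with the gzip wrapping applied. It assumes pydoop's HDFS file objects can be used as context managers, and it encodes each string to bytes before writing, since gzip.GzipFile expects bytes on Python 3:

import os
import gzip
import numpy as np
import pydoop.hdfs as hdfs

def create_data_distributed(workerNum, outputDir, centers, noSamples=10, var=0.1):
    numCenters = centers.shape[0]
    dim = centers.shape[1]
    output_filename = os.path.join(outputDir, "part-%05d" % workerNum) + ".txt.gz"
    # open the raw HDFS file, then wrap it so everything written through fptr_out is gzip-compressed
    with hdfs.hdfs().open_file(output_filename, "wb") as hdfs_file:
        with gzip.GzipFile(mode="wb", fileobj=hdfs_file) as fptr_out:
            for idx in range(noSamples):
                idxCenter = np.random.randint(numCenters)
                sample = centers[idxCenter] + np.random.normal(size=(1, dim))
                # same line format as before: center index, then comma-separated coordinates
                fptr_out.write(("%d, " % idxCenter).encode())
                for i in range(len(sample[0])):
                    fptr_out.write(("%f " % sample[0][i]).encode())
                    if i < len(sample[0]) - 1:
                        fptr_out.write(b",")
                fptr_out.write(b"\n")
    return

Keeping the ".txt.gz" suffix also helps downstream tools (including Hadoop/Spark text readers) recognize the compression by file extension.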