I have a Spark aggregation whose result I want to write out to CSV, but I've found that Spark always writes the small decimal values in scientific notation. I've already tried the solution mentioned in this question, but it didn't help either.
Expected output:
foo,avg(bar)
a,0.0000002
b,0.0000001
Actual output:
foo,avg(bar)
a,2.0E-7
b,1.0E-7
See the following example:
from os import path
import shutil
import glob
from pyspark.sql import SQLContext, functions as F, types

def test(sc):
    sq = SQLContext(sc)
    data = [("a", 1e-7), ("b", 1e-7), ("a", 3e-7)]
    df = sq.createDataFrame(data, ['foo', 'bar'])
    # 12 digits with 9 decimal places
    decType = types.DecimalType(precision=12, scale=9)
    # Cast both the column input and the column output to Decimal
    aggs = [F.mean(F.col("bar").cast(decType)).cast(decType)]
    groups = [F.col("foo")]
    result = df.groupBy(*groups).agg(*aggs)
    write(result)
    return df, aggs, groups, result

def write(result):
    tmpDir = path.join("res", "tmp")
    config = {"sep": ","}
    result.write.format("csv")\
        .options(**config)\
        .save(tmpDir)
    # Once the distributed portion is done, merge everything into a single file
    allFiles = glob.glob(path.join(tmpDir, "*.csv"))
    fullOut = path.join("res", "final.csv")
    with open(fullOut, 'wb') as wfd:
        # First write out the header row (encoded, since the file is open in binary mode)
        header = config.get("sep", ',').join(result.columns)
        wfd.write((header + "\n").encode())
        for f in allFiles:
            with open(f, 'rb') as fd:
                shutil.copyfileobj(fd, wfd)
    shutil.rmtree(tmpDir)
In the pyspark shell:
import spark_test as t
t.test(sc)
Answer 0 (score: 1)
>>> from pyspark.sql.functions import format_string
>>> df1 = spark.createDataFrame([('a','2.0e-7'),('b','1e-5'),('c','1.0e-7')],['foo','avg'])
>>> df1.show()
+---+------+
|foo| avg|
+---+------+
| a|2.0e-7|
| b| 1e-5|
| c|1.0e-7|
+---+------+
>>> df1.select('foo','avg',format_string('%.7f',df1.avg.cast('float')).alias('converted')).show()
+---+------+---------+
|foo| avg|converted|
+---+------+---------+
| a|2.0e-7|0.0000002|
| b| 1e-5|0.0000100|
| c|1.0e-7|0.0000001|
+---+------+---------+
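Applied to the aggregation in the question, the same idea might look like the sketch below (untested; it assumes the df, foo, and bar names from the question's test()). Since format_string returns a string column, the CSV writer has nothing left to render in scientific notation:

from pyspark.sql import functions as F

# Format the mean as fixed-point text with 7 decimal places instead of
# letting the CSV writer stringify the raw double.
result = df.groupBy("foo").agg(
    F.format_string("%.7f", F.mean("bar")).alias("avg(bar)")
)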
Answer 1 (score: 0)
Have you tried casting the aggregated result to String? That way Excel won't recognize the value as a decimal, so it won't display it in scientific notation.
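A minimal sketch of that suggestion, with one caveat: a bare .cast('string') on a double column still renders as '2.0E-7', so this version uses format_number (a stock pyspark.sql function) to build the string; the df, foo, and bar names are again assumed from the question's test():

from pyspark.sql import functions as F

# format_number returns a StringType column rounded to a fixed number of
# decimal places, so no scientific notation can appear in the CSV.
# Note: it also inserts comma grouping for values >= 1000.
aggs = [F.format_number(F.mean("bar"), 7).alias("avg(bar)")]
result = df.groupBy("foo").agg(*aggs)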