PySpark: How to create a CSV file from RDD results using PySpark?

Time: 2018-11-21 06:28:08

Tags: python python-3.x apache-spark pyspark apache-spark-sql

How can I create and append to a CSV file from RDD results using PySpark?

Here is my code. On each iteration, I need to append the result row to the CSV:

import csv
import numpy as np
from pyspark.sql.functions import isnan

for line in tcp.collect():
    # print value in MyCol1 for each row
    print(line)
    # pull the column into a NumPy array and drop NaN entries
    v3 = np.array(data.select(line).collect())
    x = v3[np.logical_not(np.isnan(v3))]
    notnan_cnt = data.filter(data[line] != "").count()
    print(x)
    # count empty, null, and NaN values in the column
    cnt_null = data.filter((data[line] == "") | data[line].isNull() | isnan(data[line])).count()
    print(cnt_null, notnan_cnt)
    # summary statistics for the column
    res_df = (line, x.min(), np.percentile(x, 25), np.mean(x), np.std(x),
              np.percentile(x, 75), x.max(), cnt_null)
    print(res_df)
    # open in append mode so each iteration adds a row; data_output_file
    # must be a path string, not an RDD
    with open(data_output_file, 'a') as fp:
        wr = csv.writer(fp, dialect='excel')
        wr.writerow(res_df)

Sample result of res_df from the RDD:

['var_id', 10000001, 14003088.0, 14228946.912793402, 1874168.857698741, 15017976.0, 18000192, 0]

This gives me the error "TypeError: coercing to Unicode: need string or buffer, RDD found". Can you help?
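For reference, that error message suggests `data_output_file` holds an RDD rather than a file path; the built-in `open()` only accepts a path string. A minimal sketch of the write step under that assumption (the `output_path` value is a hypothetical placeholder, and the `rows` list reuses the sample `res_df` tuple above; inside the loop you would instead do `rows.append(res_df)`):

import csv

# Hypothetical fix: open() needs a plain path string, not an RDD.
output_path = "/tmp/column_stats.csv"

# Accumulate one stats tuple per column, then write the CSV in one pass
# (sample row taken from the question's res_df output).
rows = [
    ('var_id', 10000001, 14003088.0, 14228946.912793402,
     1874168.857698741, 15017976.0, 18000192, 0),
]

with open(output_path, 'w', newline='') as fp:
    wr = csv.writer(fp, dialect='excel')
    wr.writerows(rows)

Writing once at the end also avoids reopening the file on every iteration.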

0 Answers:

No answers yet