I am trying to write the result held in a variable to a CSV file and then create a JSON from it. Each iteration of a for loop writes a result like the one below to the variable res_df. If it is possible to create the JSON directly, without going through a CSV, I would happily do that instead. Please help.
'var_id', 10000001, 14003088.0, 14228946.912793402, 1874168.857698741, 15017976.0, 18000192, 0
Now I want to append this result to a CSV file and then create a JSON from it. I have already implemented this in my Python code; I need your help doing the same in PySpark.
Python code:
import csv
import numpy as np
import pandas as pd

res_df = (line, x.min(), np.percentile(x, 25), np.mean(x), np.std(x),
          np.percentile(x, 75), x.max(), df[line].isnull().mean() * 100)
# append one row of column statistics to the CSV
with open(data_output_file, 'a', newline='') as csvfile:
    writerows = csv.writer(csvfile, delimiter=',',
                           quotechar='"', quoting=csv.QUOTE_MINIMAL)
    writerows.writerow(res_df)
# read the accumulated CSV back and dump it to a JSON file
quality_json_df = pd.read_csv(r'./DQ_RESULT.csv')
quality_json_df.to_json("./Dq_Data.json", orient="records")
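For reference, if the CSV is only an intermediate step, a minimal pandas-only sketch could skip it entirely; this assumes the res_df tuples are first accumulated into a plain Python list (here called rows, which is not in the original code):

import pandas as pd

# hypothetical: `rows` is a list of the res_df tuples collected inside the loop
columns = ["variable_name", "min", "Q1", "mean", "std", "Q3", "max", "null_pct"]
quality_json_df = pd.DataFrame(rows, columns=columns)
quality_json_df.to_json("./Dq_Data.json", orient="records")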
My PySpark code:
import numpy as np
from pyspark.sql.functions import isnan

for line in tcp.collect():
    # print value in MyCol1 for each row
    print(line)
    v3 = np.array(data.select(line).collect())
    x = v3[np.logical_not(np.isnan(v3))]
    print(x)
    cnt_null = data.filter((data[line] == "") | data[line].isNull() | isnan(data[line])).count()
    print(cnt_null)
    res_df = line, x.min(), np.percentile(x, 25), np.mean(x), np.std(x), np.percentile(x, 75), x.max(), cnt_null
    print(res_df)
Answer 0 (score: 0)
import json
import numpy as np
from pyspark.sql.functions import isnan

json_output = []
column_statistic = ["variable_name", "min", "Q1", "mean", "std", "Q3", "max", "null_value"]

for line in tcp.collect():
    # print value in MyCol1 for each row
    print(line)
    v3 = np.array(data.select(line).collect())
    x = v3[np.logical_not(np.isnan(v3))]
    # count of non-NaN values
    notnan_cnt = np.count_nonzero(~np.isnan(v3))
    print(x)
    cnt_null = data.filter((data[line] == "") | data[line].isNull() | isnan(data[line])).count()
    print(cnt_null, notnan_cnt)
    res_df = [str(line), x.min(), np.percentile(x, 25), np.mean(x), np.std(x), np.percentile(x, 75), x.max(), cnt_null]
    # build one JSON record per column from the statistic names and values
    json_row = {key: value for key, value in zip(column_statistic, res_df)}
    json_output.append(json_row)
    print(res_df)

with open("json_result.json", "w") as fp:
    json.dump(json_output, fp)
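As a side note, the same statistics could be computed without collecting each column to the driver, using Spark's own aggregations. This is only a rough sketch under the assumptions that the columns are numeric and a reasonably recent Spark version is available; percentile_approx returns approximate quartiles rather than the exact values np.percentile gives:

import json
from pyspark.sql import functions as F

json_output = []
for line in tcp.collect():
    # compute all statistics for this column in a single Spark aggregation
    stats = data.select(
        F.min(line).alias("min"),
        F.expr("percentile_approx({}, 0.25)".format(line)).alias("Q1"),
        F.mean(line).alias("mean"),
        F.stddev(line).alias("std"),
        F.expr("percentile_approx({}, 0.75)".format(line)).alias("Q3"),
        F.max(line).alias("max"),
        F.count(F.when((F.col(line) == "") | F.col(line).isNull() | F.isnan(line), line)).alias("null_value"),
    ).collect()[0].asDict()
    stats["variable_name"] = line
    json_output.append(stats)

with open("json_result.json", "w") as fp:
    json.dump(json_output, fp)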