Error when trying to save a parquet file as CSV using to_csv

Date: 2018-02-03 07:22:49

Tags: python pandas csv pyspark

I am trying to read a parquet file that contains some lab data, load it into a temporary table, run a query against that table, and then save the results as a comma-separated CSV file with a header row. Here is my code:

lines = sqlContext.read.parquet("hdfs:////data/lab_01/")
lines.registerTempTable("test_data")
resultsDF = sqlContext.sql("select * from test_data")

header = ["lab_key", "tray_id", "time", "gene_id", "source"]
pandas.resultsDF.to_csv("/data/results.csv", sep=",", columns = header)

The error I get, raised on the last line of code, is:


AttributeError: module 'pandas' has no attribute 'resultsDF'

I am looking for a CSV file with a header, like this:

lab_key  tray_id   time  gene_id  source
10       26905972  1     8315     2
30       26984972  1     8669     2
30       26949059  1     1023     2
30       26905972  1     1062     1

Here are the first few rows of my dataframe resultsDF:

[Row(lab_key=1130, tray_id=26984905972, time=1, gene_id=833715, source=2),
 Row(lab_key=1130, tray_id=26984905972, time=1, gene_id=866950, source=2),
 Row(lab_key=1130, tray_id=26984905972, time=1, gene_id=1022843, source=2),
 ...]

3 Answers:

Answer 0 (score: 1)

To answer the question directly: you need to convert to pandas and then write the CSV, like this:

resultsDF.toPandas().to_csv("/data/results.csv")
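To see why the original call failed: `to_csv` is a method on a pandas DataFrame object, not an attribute of the `pandas` module, which is exactly what the `AttributeError` is pointing at. A minimal sketch of the pandas side of this pattern, using a small stand-in frame (column names from the question; the values are illustrative only):

```python
import pandas as pd

# Stand-in for resultsDF.toPandas(); values are made up for illustration
pdf = pd.DataFrame({
    "lab_key": [10, 30],
    "tray_id": [26905972, 26984972],
    "time": [1, 1],
    "gene_id": [8315, 8669],
    "source": [2, 2],
})

# Correct: call to_csv on the DataFrame instance, not the pandas module
csv_text = pdf.to_csv(index=False)
print(csv_text.splitlines()[0])  # -> lab_key,tray_id,time,gene_id,source
```

With no path argument, `to_csv` returns the CSV as a string, which makes the result easy to inspect before writing to disk.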

However, converting to pandas is a poor approach here: if all you need is to save as CSV, there is no need for a pandas DataFrame at all, and you should use the following instead:

resultsDF.repartition(1).write.format('com.databricks.spark.csv').save('path/to/my.csv', header='true')

Answer 1 (score: 0)

You have a Spark DataFrame, which you first need to convert into a pandas DataFrame.

lines = sqlContext.read.parquet("hdfs:////data/lab_01/")
lines.registerTempTable("test_data")
resultsDF = sqlContext.sql("select * from test_data")

# Convert the Spark DataFrame to a pandas DataFrame
resDF = resultsDF.toPandas()
header = ["lab_key", "tray_id", "time", "gene_id", "source"]

# Call to_csv on the pandas DataFrame, not on the pandas module
resDF.to_csv("/data/results.csv", sep=",", columns=header)
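For reference, the `columns=` argument of `to_csv` selects which columns are written and in what order; any column not in the list is dropped. A small self-contained check (the frame and its values are assumptions for illustration, not the real lab data):

```python
import pandas as pd

# Illustrative frame with an extra column that the header list omits
resDF = pd.DataFrame({
    "extra": [0, 0],
    "lab_key": [10, 30],
    "tray_id": [26905972, 26984972],
    "time": [1, 1],
    "gene_id": [8315, 8669],
    "source": [2, 2],
})
header = ["lab_key", "tray_id", "time", "gene_id", "source"]

# Only the listed columns appear, in the listed order; "extra" is dropped
out = resDF.to_csv(index=False, sep=",", columns=header)
print(out.splitlines()[0])  # -> lab_key,tray_id,time,gene_id,source
```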

Answer 2 (score: 0)

You can use one of the options below:

df.rdd.map(lambda line: ",".join([str(t1) for t1 in line])).saveAsTextFile("filename")

df.rdd.map(lambda line: ",".join(map(str, line))).saveAsTextFile("filename")
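The per-record logic inside those `map` calls can be checked in plain Python, since Spark `Row` objects iterate over their field values like tuples (the row values here are illustrative):

```python
# Simulate one DataFrame row as a plain tuple of field values
line = (1130, 26984905972, 1, 833715, 2)

# Same transformation the lambda applies to each RDD record:
# cast every field to str, then comma-join
record = ",".join(map(str, line))
print(record)  # -> 1130,26984905972,1,833715,2
```

Note that the `str` cast is what makes the join work, because `str.join` raises a `TypeError` on non-string items such as these integers.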

Let me know if this helps.