How to load a CSV file into a pyspark DataFrame

Date: 2019-02-13 13:15:32

Tags: python apache-spark pyspark

How do I convert this csv file into a DataFrame?

csv values -

country,2015,2016,2017,2018,2019
Norway,4.141,4.152,4.157,4.166,4.168
Australia,4.077,4.086,4.093,4.110,4.115
Switzerland,4.009,4.036,4.032,4.041,4.046
Netherlands,3.977,3.994,4.043,4.045,4.045
UnitedStates,4.017,4.027,4.039,4.045,4.050
Germany,3.988,3.999,4.017,4.026,4.028
NewZealand,3.982,3.997,3.993,3.999,4.018

I want it in DataFrame / tabular format -

 +------------+-----+-----+-----+-----+-----+
 |     country| 2015| 2016| 2017| 2018| 2019|
 +------------+-----+-----+-----+-----+-----+
 |      Norway|4.141|4.152|4.157|4.166|4.168|
 |   Australia|4.077|4.086|4.093|4.110|4.115|
 | Switzerland|4.009|4.036|4.032|4.041|4.046|
 | Netherlands|3.977|3.994|4.043|4.045|4.045|
 |UnitedStates|4.017|4.027|4.039|4.045|4.050|
 |     Germany|3.988|3.999|4.017|4.026|4.028|
 |  NewZealand|3.982|3.997|3.993|3.999|4.018|
 +------------+-----+-----+-----+-----+-----+

1 Answer:

Answer 0: (score: 0)

Read the documentation here. Assuming your file filename.csv is stored at path, you can import it with some very basic configuration.

# Specify a schema (all columns read in as strings for now)
from pyspark.sql.types import StructType, StructField, StringType

schema = StructType([
    StructField('country', StringType()),
    StructField('2015', StringType()),
    StructField('2016', StringType()),
    StructField('2017', StringType()),
    StructField('2018', StringType()),
    StructField('2019', StringType()),
])

# Start the import
df = spark.read.schema(schema)\
               .format("csv")\
               .option("header","true")\
               .option("sep",",")\
               .load("path/filename.csv")
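
To sanity-check the import, you can inspect the schema and the first few rows (a minimal check, run against the df created above):

# Quick check: with the schema above, every column is loaded as a string
df.printSchema()
df.show(5, truncate=False)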

Keep in mind that your numbers will be imported as strings, because PySpark does not recognize the thousands separator (the '.'). You will have to convert them to numerics, as shown below -

# Convert them to numerics: strip the '.' separator, then cast to int
from pyspark.sql.functions import regexp_replace, col

cols_with_thousands_separator = ['2015', '2016', '2017', '2018', '2019']
for c in cols_with_thousands_separator:
    df = df.withColumn(c, regexp_replace(col(c), '\\.', ''))\
           .withColumn(c, col(c).cast("int"))
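
As a more compact alternative sketch (the same replace-and-cast logic in a single select instead of a loop; column names taken from the schema above):

# Same conversion in one pass over the year columns
from pyspark.sql.functions import regexp_replace, col

year_cols = ['2015', '2016', '2017', '2018', '2019']
df = df.select(
    col('country'),
    *[regexp_replace(col(c), '\\.', '').cast('int').alias(c) for c in year_cols]
)
df.printSchema()  # the year columns should now be int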