Reading a CSV

Date: 2018-08-22 07:59:09

Tags: scala csv apache-spark dataframe

I have a CSV whose data looks like this:

0,0;1,0;2,0;3,0;4,0;6,0;8,0;9,1
4,0;2,1;2,0;1,0;1,0;0,1;3,0;1,0;"BC"
4,0;2,1;2,0;1,0;1,0;0,1;4,0;1,0;"BC"
4,0;2,1;2,0;1,0;1,0;0,1;5,0;1,0;"BC"
4,0;2,1;2,0;1,0;1,0;0,1;6,0;1,0;"BC"

I want to turn it into a DataFrame whose last column holds the "value". I have written this code in Scala:

val rawdf = spark.read.format("csv")
                 .option("header", "true")
                 .option("delimiter", ";")
                 .load(CSVPATH)

But rawdf.show(numRows = 4) gives me this result:

+---+---+---+---+---+---+---+---+
|0,0|1,0|2,0|3,0|4,0|6,0|8,0|9,1|
+---+---+---+---+---+---+---+---+
|4,0|2,1|2,0|1,0|1,0|0,1|3,0|1,0|
|4,0|2,1|2,0|1,0|1,0|0,1|4,0|1,0|
|4,0|2,1|2,0|1,0|1,0|0,1|5,0|1,0|
|4,0|2,1|2,0|1,0|1,0|0,1|6,0|1,0|
+---+---+---+---+---+---+---+---+

How can I add the last column in Spark? Or should I write it into the CSV file itself?

3 Answers:

Answer 0 (score: 3)

Here is a way to do this without changing the CSV file: specify the schema in code:

import org.apache.spark.sql.types.{StructField, StringType, StructType}

val schema = StructType(
    Array(
        StructField("0,0", StringType),
        StructField("1,0", StringType),
        StructField("2,0", StringType),
        StructField("3,0", StringType),
        StructField("4,0", StringType),
        StructField("6,0", StringType),
        StructField("8,0", StringType),
        StructField("9,1", StringType), 
        StructField("X", StringType)
    )
)

val rawdf = 
    spark.read.format("csv")
        .option("header", "true")
        .option("delimiter", ";")
        .schema(schema)
        .load("tmp.csv")

Answer 1 (score: 0)

Spark maps the data columns to however many header columns are available, because you set:

.option("header", "true")

You can work around this in one of two ways:

  1. Set header = false.
  2. Add a header column for the last data column, or simply append a semicolon (;) to the end of the header line (a loading sketch follows the examples below).

For example:

0,0;1,0;2,0;3,0;4,0;6,0;8,0;9,1;
4,0;2,1;2,0;1,0;1,0;0,1;3,0;1,0;"BC"
4,0;2,1;2,0;1,0;1,0;0,1;4,0;1,0;"BC"
4,0;2,1;2,0;1,0;1,0;0,1;5,0;1,0;"BC"
4,0;2,1;2,0;1,0;1,0;0,1;6,0;1,0;"BC"

OR

0,0;1,0;2,0;3,0;4,0;6,0;8,0;9,1;col_end
4,0;2,1;2,0;1,0;1,0;0,1;3,0;1,0;"BC"
4,0;2,1;2,0;1,0;1,0;0,1;4,0;1,0;"BC"
4,0;2,1;2,0;1,0;1,0;0,1;5,0;1,0;"BC"
4,0;2,1;2,0;1,0;1,0;0,1;6,0;1,0;"BC"
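With the header line amended as in the second example, the original load code works unchanged. A minimal sketch, assuming the file keeps the col_end header and that the column should end up named "value" as in the question:

val rawdf = spark.read.format("csv")
                 .option("header", "true")
                 .option("delimiter", ";")
                 .load(CSVPATH)

// assumption: rename the trailing "col_end" column to "value", as asked in the question
val withValue = rawdf.withColumnRenamed("col_end", "value")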

Answer 2 (score: 0)

If you do not know how long the data rows are, you can read the file as an RDD, do some parsing, then build a schema and form a DataFrame from it, as shown below:

import scala.util.Try
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StructField, StringType, StructType}

// read the data as an RDD and split each line on the delimiter
val rddData = spark.sparkContext.textFile(CSVPATH)
    .map(_.split(";", -1))

// find the maximum row length in the data and build a schema with that many string columns
val maxlength = rddData.map(_.length).max
val schema = StructType((1 to maxlength).map(x => StructField(s"col_${x}", StringType, true)))

// pad every row out to maxlength (filling "null" where a field is missing) and apply the schema
val rawdf = spark.createDataFrame(
    rddData.map(x => Row.fromSeq((0 until maxlength).map(index => Try(x(index)).getOrElse("null")))),
    schema)

rawdf.show(false)

This should give you:

+-----+-----+-----+-----+-----+-----+-----+-----+-----+
|col_1|col_2|col_3|col_4|col_5|col_6|col_7|col_8|col_9|
+-----+-----+-----+-----+-----+-----+-----+-----+-----+
|0,0  |1,0  |2,0  |3,0  |4,0  |6,0  |8,0  |9,1  |null |
|4,0  |2,1  |2,0  |1,0  |1,0  |0,1  |3,0  |1,0  |"BC" |
|4,0  |2,1  |2,0  |1,0  |1,0  |0,1  |4,0  |1,0  |"BC" |
|4,0  |2,1  |2,0  |1,0  |1,0  |0,1  |5,0  |1,0  |"BC" |
|4,0  |2,1  |2,0  |1,0  |1,0  |0,1  |6,0  |1,0  |"BC" |
+-----+-----+-----+-----+-----+-----+-----+-----+-----+
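To line up with the question, the dynamically named last column can then be renamed; a minimal sketch, assuming "value" is the desired name:

// assumption: the last column (col_9 here) should be exposed as "value"
val withValue = rawdf.withColumnRenamed(s"col_${maxlength}", "value")
withValue.printSchema()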

I hope this answer helps.