Create a DataFrame by loading a CSV file with Scala in Spark

Asked: 2018-03-06 02:37:30

Tags: scala csv apache-spark dataframe apache-spark-sql

But the CSV file has extra double quotes added, which causes all the columns to be read as a single column.

There are four columns, a header, and 2 rows:

"""SlNo"",""Name"",""Age"",""contact"""
"1,""Priya"",78,""Phone"""
"2,""Jhon"",20,""mail"""

val df = sqlContext.read.format("com.databricks.spark.csv")
  .option("header", "true")
  .option("delimiter", ",")
  .option("inferSchema", "true")
  .load("bank.csv")
df: org.apache.spark.sql.DataFrame = ["SlNo","Name","Age","contact": string]
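
Because each line of the file is itself wrapped in quotes, a standard CSV parser treats the whole line as one quoted field (the doubled "" are just escaped quotes inside it), which is why everything collapses into a single column. You can confirm this from the DataFrame above; a minimal sketch, assuming the df from the read shown:

//the schema has a single string column whose name is the entire quoted header line
df.printSchema()
//each row shows up as one concatenated string
df.show(false)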

1 Answer:

Answer 0 (score: 1)

You can use sparkContext to replace every " with an empty string and use zipWithIndex() to separate the header from the text data, so that you can build a custom schema and a row RDD from the data. Finally, just pass the row RDD and the schema to sqlContext's createDataFrame API.

import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StringType, StructField, StructType}

//reading the text file, stripping the quotes, splitting on commas, and zipping with index
val rdd = sc.textFile("bank.csv").map(_.replaceAll("\"", "").split(",")).zipWithIndex()
//separating the header to form the schema
val header = rdd.filter(_._2 == 0).flatMap(_._1).collect()
val schema = StructType(header.map(StructField(_, StringType, true)))
//separating the data to form the row rdd
val rddData = rdd.filter(_._2 > 0).map(x => Row.fromSeq(x._1))
//creating the dataframe
sqlContext.createDataFrame(rddData, schema).show(false)

You should get

+----+-----+---+-------+
|SlNo|Name |Age|contact|
+----+-----+---+-------+
|1   |Priya|78 |Phone  |
|2   |Jhon |20 |mail   |
+----+-----+---+-------+
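
Note that this approach leaves every column as StringType, so the inferSchema behaviour from the original read is lost. If typed columns matter, one option is to write the cleaned lines back out and re-read them through the csv source; a minimal sketch, assuming an output path bank_cleaned that does not already exist:

//strip the stray quotes and persist the cleaned text
sc.textFile("bank.csv").map(_.replaceAll("\"", "")).saveAsTextFile("bank_cleaned")
//re-read through the csv source so inferSchema can type Age as an integer
val typed = sqlContext.read.format("com.databricks.spark.csv")
  .option("header", "true")
  .option("inferSchema", "true")
  .load("bank_cleaned")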

I hope the answer is helpful.