How to remove extra quotes from each column value

Asked: 2017-12-01 09:04:09

Tags: scala apache-spark apache-spark-sql spark-dataframe

I need to remove the extra quotes from each column value. Here are my current column values:

Array[Array[String]] = Array(Array("58, ""management"", ""married"", ""tertiary"", ""no"", 2143, ""yes"", ""no"", ""unknown"", 5, ""may"", 261, 1, -1, 0, ""unknown"", ""no"""), Array("44, ""technician"", ""single"", ""secondary"", ""no"", 29, ""yes"", ""no"", ""unknown"", 5, ""may"", 151, 1, -1, 0, ""unknown"", ""no"""), Array("33, ""entrepreneur"", ""married"", ""secondary"", ""no"", 2, ""yes"", ""yes"", ""unknown"", 5, ""may"", 76, 1, -1, 0, ""unknown"", ""no""")))

Expected output:

Array[Array[String]] = Array(Array(58, management, married, tertiary, no, 2143, yes, no, unknown, 5, may, 261, 1, -1, 0, unknown, no), Array(44, technician, single, secondary, no, 29, yes, no, unknown, 5, may, 151, 1, -1, 0, unknown, no), Array(33, entrepreneur, married, secondary, no, 2, yes, yes, unknown, 5, may, 76, 1, -1, 0, unknown, no))

Here is the code:

val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._
import org.apache.spark.sql._

val data = sc.textFile("simplilearn/Project 1_dataset_bank-full.csv")
val header = data.first()

val data1 = data.filter(row=>row != header)
val finalSet = data1.map(row=>row.split(";"))

The result above is stored in the finalSet RDD.

1 Answer:

Answer 0 (score: 1)

When creating the final RDD, simply strip out all the quote characters. Replace

val finalSet = data1.map(row=>row.split(";"))

with

val finalSet = data1.map(row => row.split(";").map(_.trim.replace("\"", "")))
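The cleaning step itself needs no Spark context, so it can be verified on a plain string. A minimal sketch, using a hypothetical sample row in the same semicolon-delimited format as the question's CSV (the row contents are illustrative, not taken from the real dataset):

```scala
// Hypothetical sample row, shaped like one line of the bank CSV.
val row = "58;\"management\";\"married\";\"tertiary\";\"no\";2143"

// Same transformation as in the answer: split on ';', then strip
// whitespace and every double-quote character from each field.
val cleaned: Array[String] = row.split(";").map(_.trim.replace("\"", ""))

println(cleaned.mkString(", "))
// prints: 58, management, married, tertiary, no, 2143
```

Note that replace("\"", "") removes all double quotes unconditionally, so it is only safe when no field legitimately contains an embedded quote; for fully quoted CSV with escaping, a proper CSV parser would be the more robust choice.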