How to format CSV data by removing the quotes and double-double quotes around fields

Date: 2021-04-13 05:09:54

Tags: scala spark-shell

I am working with a dataset that apparently has double quotes around every field. I could not see them at first, because when I viewed the file it opened in Excel by default.

The raw dataset looks like this:

"age;"job";"marital";"education";"default";"balance";"housing";"loan";"contact";"day";"month";"duration";"campaign";"pdays";"previous";"poutcome";"y""----header 58;"management";"married";"tertiary";"no";2143;"yes";"no";"unknown";5;"may";261;1;-1;0;"unknown";"no"--row

I am using the following code:

val bank = spark.read.format("com.databricks.spark.csv")
  .option("header", true)
  .option("ignoreLeadingWhiteSpace", true)
  .option("inferSchema", true)
  .option("quote", "")
  .option("delimiter", ";")
  .load("bank_dataset.csv")

But what I get is: data with quotes on either end and string values wrapped in double-double quotes. What I want is: age as int and string values without the surrounding quotes.
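To see why setting `quote` to the empty string leaves the quote characters in place, here is a minimal plain-Scala sketch (no Spark needed, sample row made up from the data above): with no quote character configured, the parser effectively just splits on the delimiter, so the double quotes stay inside each token.

```scala
// Plain-Scala sketch of what disabling the quote character does:
// splitting on ";" keeps the literal double quotes in the string tokens.
object QuoteDemo {
  def main(args: Array[String]): Unit = {
    val row = """58;"management";"married";"tertiary";"no";2143"""
    val tokens = row.split(";")
    println(tokens.mkString(" | "))
    // prints: 58 | "management" | "married" | "tertiary" | "no" | 2143
  }
}
```

This is why the string columns come back as "management", "married", and so on instead of clean values.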

1 answer:

Answer 0: (score: 0)

If you still have this raw data and want to clean it up, you can use `regexp_replace` to strip out all the double quotes (`"`):

import org.apache.spark.sql.functions.{col, regexp_replace}

// Strip double quotes from every value, and from the column names as well
val expr = df.columns
  .map(c => regexp_replace(col(c), "\"", "").as(c.replaceAll("\"", "")))

df.select(expr: _*).show(false)

Output:

+---+----------+-------+---------+-------+-------+-------+----+-------+---+-----+--------+--------+-----+--------+--------+---+
|age|job       |marital|education|default|balance|housing|loan|contact|day|month|duration|campaign|pdays|previous|poutcome|y  |
+---+----------+-------+---------+-------+-------+-------+----+-------+---+-----+--------+--------+-----+--------+--------+---+
|58 |management|married|tertiary |no     |2143   |yes    |no  |unknown|5  |may  |261     |1       |-1   |0       |unknown |no |
+---+----------+-------+---------+-------+-------+-------+----+-------+---+-----+--------+--------+-----+--------+--------+---+
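The same quote-stripping idea can be checked outside Spark. A plain-Scala sketch (hypothetical field values taken from the sample row; no Spark session involved) showing that once the quotes are gone, `age` parses cleanly as an `Int`:

```scala
// Demonstrates the effect of replaceAll("\"", "") on quoted CSV fields.
object CleanDemo {
  def main(args: Array[String]): Unit = {
    val rawFields = Seq("58", "\"management\"", "\"married\"")
    val cleaned   = rawFields.map(_.replaceAll("\"", ""))
    println(cleaned.mkString(";")) // prints: 58;management;married
    // With the quotes removed, the age field is a plain integer string.
    val age = cleaned.head.toInt
    println(age)                   // prints: 58
  }
}
```

In Spark itself, once the quotes are stripped, re-reading with `inferSchema` (or an explicit `col("age").cast("int")`) should give you `age` as an int rather than a string.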