I am trying to parse this CSV file with the settings below.
ArrayType
"[""a"",""ab"",""avc""]"
"[1,23,33]"
"[""1"",""22""]"
"[""1"",""22"",""12222222.32342342314"",123412423523.3414]"
"[a,c,s,a,d,a,q,s]"
"["""","""","""",""""]"
"["","","",""]"
"[""abcgdjasc"",""jachdac"",""''""]"
"[""a"",""ab"",""avc""]"
val df = spark.read.format("csv")
  .option("header", "true")
  .option("escape", "\"")
  .option("quote", "\"")
  .load("/home/ArrayType.csv")
Output:
scala> df.show()
+--------------------+
| ArrayType|
+--------------------+
| ["a","ab","avc"]|
| [1,23,33]|
| ["1","22"]|
|["1","22","122222...|
| [a,c,s,a,d,a,q,s]|
| ["","","",""]|
| [",",","]|
|["abcgdjasc","jac...|
| ["a","ab","avc"]|
+--------------------+
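For reference, the first file uses standard CSV quoting: a literal quote inside a quoted field is doubled, and setting both quote and escape to " tells the parser to collapse "" back to a single quote. A minimal standalone sketch of the same settings with plain univocity (my own illustration, assuming univocity 2.x):

import com.univocity.parsers.csv.{CsvParser, CsvParserSettings}

val settings = new CsvParserSettings()
settings.getFormat.setQuote('"')
settings.getFormat.setQuoteEscape('"') // "" inside a quoted field becomes a literal "
val parser = new CsvParser(settings)

// The whole line parses as one field; the doubled quotes collapse.
val row = parser.parseLine("\"[\"\"a\"\",\"\"ab\"\",\"\"avc\"\"]\"")
println(row.mkString("|")) // prints: ["a","ab","avc"]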
However, because the escape character here is " (matching the doubled quotes in the data), I can read each row as a single column. But if the input file instead looks like this:
ArrayType
"["a","ab","avc"]"
"[1,23,33]"
"["1","22"]"
"["1","22","12222222.32342342314",123412423523.3414]"
"[a,c,s,a,d,a,q,s]"
"["","","",""]"
"[",",","]"
"["abcgdjasc","jachdac","''"]"
"["a","ab","avc"]"
it gives me the following output, whereas I need it parsed the same way as before:
scala> df.show()
+-----------------+-------+--------------------+-------------------+
| _c0| _c1| _c2| _c3|
+-----------------+-------+--------------------+-------------------+
| "["a"| ab| "avc"]"| |
| [1,23,33]| | | |
| "["1"| "22"]"| | |
| "["1"| 22|12222222.32342342314|123412423523.3414]"|
|[a,c,s,a,d,a,q,s]| | | |
| [",",","]| | | |
| [| ,| ]| |
| "["abcgdjasc"|jachdac| "''"]"| |
| "["a"| ab| "avc"]"| |
+-----------------+-------+--------------------+-------------------+
So even when the quotes inside the values are not doubled, I still want the same single-column output as before, instead of the value being split on its commas (here each unescaped quote ends the quoted section, so the commas between them act as delimiters).
How can I read the second CSV file as a single column in the DataFrame?
How can I parse both kinds of files into a single column?
I am using the univocity CSV parser for the parsing.
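One workaround I am considering, in case the second file is simply not valid CSV: bypass CSV quoting entirely by reading each physical line as text and stripping the wrapping quotes myself. A rough sketch (the column name and regex are my own choices, assuming a spark-shell session where spark is available):

import org.apache.spark.sql.functions.regexp_replace
import spark.implicits._

// Read each physical line as one value, so commas are never treated as delimiters.
val raw = spark.read.text("/home/ArrayType.csv").toDF("ArrayType")

// Drop the header line, then strip the outer wrapping quotes.
val single = raw
  .filter($"ArrayType" =!= "ArrayType")
  .withColumn("ArrayType", regexp_replace($"ArrayType", "^\"|\"$", ""))

single.show(false)

This gives one column for the second file as-is. For the first file the doubled quotes would still need collapsing ("" to "), but that same replacement would corrupt rows like ["","","",""] in the unescaped file, so one blind rule cannot normalize both formats.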