CSV file contains data with special characters, including commas (,), backslashes (\) and double quotes ("") — unable to create a df with the correct number of columns? - pyspark

Asked: 2020-04-09 09:17:44

Tags: python apache-spark pyspark

I have a CSV file that I want to use to create a DataFrame in pyspark, but I am unable to do so because some rows contain data with special characters, and only about half of the columns are enclosed in double quotes. Below is the data and what I have tried so far.

sample_row

"ABG090D",2019-03-03 00:00:00.0000000,"A","some Data C\" AB01","Some Data","LOS","NEW",2019-04-11 00:00:00.0000000,"GHYTR","7860973478","0989","A",2019-03-03 00:00:00.0000000,"Y","N","N","N",1,"N","D016619",,"$,$#,&","Y",
"69901",,,,"FGF",89.00,"W",,"N","R","F",5.00,6.00,6.00,9.00,2.00,0,0,"9090",,"N",,,"1","N",,,"F",,2019-03-03 00:00:00.0000000,,,,,"N","A","N","N","N","N","N",,,,,,,"H",,,,,,,,,,"N","A","0","0","0",,0,0,0,0,0,0,0,"N","00","USA",
"C","I",0,,,,"FGF",0,,,"N","UOIU","5",,0,,0,0,,,"878","N",2019-04-11 09:44:00.0000000,"8980909","H",,,,"N","2","T","SomeData",
2020-03-12 09:24:52.0000000

In the data above, the two main problems I face are:

1. "some Data C\" AB01" => because it contains a backslash (\) and a double quote (") as part of the data.

2. "$,$#,&" => because it contains commas (,) as part of the data.

df = spark.read.option("quote", "\"") \
    .option("escape", "\"") \
    .option("escape", "\\") \
    .option("delimiter", ",") \
    .option("ignoreLeadingWhiteSpace", "true") \
    .csv("/path/file.csv", customSchema)

With the code above I am able to handle "some Data C\" AB01", but the second value, "$,$#,&", is still causing problems.

I even tried the answer given in the link below, but that did not work for me either: How to remove double quotes and extra delimiter(s) with in double quotes of TextQualifier file in Scala
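For comparison outside of Spark, Python's built-in csv module can parse both problem cases when the backslash is treated as an escape character instead of doubled quotes. A minimal sketch on a shortened version of the sample row (the shortened line is constructed here for illustration, not taken verbatim from the file):

```python
import csv
import io

# Shortened sample containing both problem fields:
# an escaped quote inside a quoted field, and commas inside a quoted field.
line = '"ABG090D","some Data C\\" AB01","$,$#,&","Y"\n'

# doublequote=False together with escapechar='\\' mirrors the
# quote='"' / escape='\\' combination used in the Spark options.
reader = csv.reader(io.StringIO(line), quotechar='"',
                    escapechar='\\', doublequote=False)
row = next(reader)
print(row)
# → ['ABG090D', 'some Data C" AB01', '$,$#,&', 'Y']
```

If this splits the fields correctly for your data, the same quote/escape convention should in principle apply when configuring the Spark reader.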

1 answer:

Answer 0: (score: 0)

Given the situation, it may be better to build your own parser. I wrote a simple piece of code, shown below, which uses a regular expression to parse the file and store the fields in a `values` list.

I hope this approach works for you.

import re

regex = r"(\"([^\"]+)\",?|([^,]+),?|,)"

test_str = "\"ABG090D\",2019-03-03 00:00:00.0000000,\"A\",\"some Data C\\\" AB01\",\"Some Data\",\"LOS\",\"NEW\",2019-04-11 00:00:00.0000000,\"GHYTR\",\"7860973478\",\"0989\",\"A\",2019-03-03 00:00:00.0000000,\"Y\",\"N\",\"N\",\"N\",1,\"N\",\"D016619\",,\"$,$#,&\",\"Y\", \"69901\",,,,\"FGF\",89.00,\"W\",,\"N\",\"R\",\"F\",5.00,6.00,6.00,9.00,2.00,0,0,\"9090\",,\"N\",,,\"1\",\"N\",,,\"F\",,2019-03-03 00:00:00.0000000,,,,,\"N\",\"A\",\"N\",\"N\",\"N\",\"N\",\"N\",,,,,,,\"H\",,,,,,,,,,\"N\",\"A\",\"0\",\"0\",\"0\",,0,0,0,0,0,0,0,\"N\",\"00\",\"USA\", \"C\",\"I\",0,,,,\"FGF\",0,,,\"N\",\"UOIU\",\"5\",,0,,0,0,,,\"878\",\"N\",2019-04-11 09:44:00.0000000,\"8980909\",\"H\",,,,\"N\",\"2\",\"T\",\"SomeData\", 2020-03-12 09:24:52.0000000"

matches = re.finditer(regex, test_str)
values = []

for match in matches:
    if match.group(3) is not None:    # unquoted field
        values.append(match.group(3))
    elif match.group(2) is not None:  # quoted field, quotes stripped
        values.append(match.group(2))
    else:                             # bare comma => empty field
        values.append(None)

print(values)
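As a quick sanity check, the same regex can be exercised on a much shorter row; `parse_row` below is a hypothetical helper wrapping the loop above, not part of the original answer:

```python
import re

REGEX = r"(\"([^\"]+)\",?|([^,]+),?|,)"

def parse_row(line):
    # Scan the line left to right: group(2) captures quoted fields,
    # group(3) unquoted fields; a lone comma yields None (empty field).
    values = []
    for match in re.finditer(REGEX, line):
        if match.group(3) is not None:
            values.append(match.group(3))
        elif match.group(2) is not None:
            values.append(match.group(2))
        else:
            values.append(None)
    return values

print(parse_row('"A",,"$,$#,&",1'))
# → ['A', None, '$,$#,&', '1']
```

Note that this pattern handles commas inside quoted fields (problem 2), but quoted fields containing an escaped quote such as `C\" AB01` may still need extra handling, since `[^\"]+` stops at the first quote character it sees.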