My input csv file contains this record:
"2017-11-01","2017-10-29","2017-11-04","4532491","","","","Natural States: "The Environmental Imagination" in Maine, Oregon, and the Nation","1000","Richard W. Judd"
When I read this csv in pyspark, the field "Natural States: "The Environmental Imagination" in Maine, Oregon, and the Nation"
gets split into separate columns.
>>> df = spark.read.csv('file.csv')
>>> df.show(truncate=False)
+----------+----------+----------+----------+----+----+----+---------------------------------------------------------+-------+----------------+----+---------------+
|_c0 |_c1 |_c2 |_c3 |_c4 |_c5 |_c6 |_c7 |_c8 |_c9 |_c10|_c11 |
+----------+----------+----------+----------+----+----+----+---------------------------------------------------------+-------+----------------+----+---------------+
|2017-11-01|2017-10-29|2017-11-04| 4532491 |null|null|null|Natural States: "The Environmental Imagination" in Maine | Oregon| and the Nation |1000|Richard W. Judd|
+----------+----------+----------+----------+----+----+----+---------------------------------------------------------+-------+----------------+----+---------------+
Is there any workaround other than changing the delimiter in the input file? We cannot modify the input file.
Answer 0 (score: 2)
You can read the file with sparkContext, split each line on the multi-character delimiter ",", and then convert the rdd to a dataframe as shown below
rdd = sc.textFile("file.csv")
def replaceFunc(words):
result = []
for word in words.split("\",\""):
result.append(word.replace("\"", ""))
return result
rdd.map(replaceFunc).toDF().show(1, False)
You should get the following output
+----------+----------+----------+-------+---+---+---+------------------------------------------------------------------------------+----+---------------+
|_1 |_2 |_3 |_4 |_5 |_6 |_7 |_8 |_9 |_10 |
+----------+----------+----------+-------+---+---+---+------------------------------------------------------------------------------+----+---------------+
|2017-11-01|2017-10-29|2017-11-04|4532491| | | |Natural States: The Environmental Imagination in Maine, Oregon, and the Nation|1000|Richard W. Judd|
+----------+----------+----------+-------+---+---+---+------------------------------------------------------------------------------+----+---------------+
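If you want meaningful column names instead of the generated _1 ... _10, you can pass a list of names to toDF. A minimal sketch; the names below are only placeholders, since the input file has no header row:

cols = ["posted_date", "start_date", "end_date", "id", "col5", "col6", "col7", "title", "amount", "author"]  # placeholder names
rdd.map(replaceFunc).toDF(cols).show(1, False)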
Answer 1 (score: 0)
这可能适用于sep='","'
,如:
spark.read.csv('file.csv', sep='","')