Reading a DataFrame back after converting it to a CSV file renders an incorrect DataFrame in Scala

Date: 2018-07-15 22:24:45

Tags: scala apache-spark dataframe apache-spark-sql

I am trying to write the following DataFrame to a CSV file:

df

+--------------------+-------------------------+----------------------------+------------------------------+----------------+-----+--------------------+--------------------+--------+-----+------------+
|               title|UserData.UserValue._title|UserData.UserValue._valueRef|UserData.UserValue._valuegiven|UserData._idUser|  _id|              author|         description|   genre|price|publish_date|
+--------------------+-------------------------+----------------------------+------------------------------+----------------+-----+--------------------+--------------------+--------+-----+------------+
|XML Developer's G...|          _CONFIG_CONTEXT|                       #id13|                           qwe|              18|bk101|Gambardella, Matthew|An in-depth look ...|Computer|44.95|  2000-10-01|
|       Midnight Rain|          _CONFIG_CONTEXT|                       #id13|                        dfdfrt|              19|bk102|          Ralls, Kim|A former architec...| Fantasy| 5.95|  2000-12-16|
|     Maeve Ascendant|          _CONFIG_CONTEXT|                       #id13|                          dfdf|              20|bk103|         Corets, Eva|After the collaps...| Fantasy| 5.95|  2000-11-17|
+--------------------+-------------------------+----------------------------+------------------------------+----------------+-----+--------------------+--------------------+--------+-----+------------+

I am writing to the CSV file with the following code:

df.write.format("com.databricks.spark.csv").option("header", "true").save("hdfsOut")

This creates 3 different CSV files in the hdfsOut folder. When I try to read that data back with

var csvdf = spark.read.format("org.apache.spark.csv").option("header", true).csv("hdfsOut")
csvdf.show()

it displays the DataFrame in an incorrect format, as shown below:

+--------------------+-------------------------+----------------------------+------------------------------+----------------+-----+--------------------+--------------------+-----+-----+------------+
|               title|UserData.UserValue._title|UserData.UserValue._valueRef|UserData.UserValue._valuegiven|UserData._idUser|  _id|              author|         description|genre|price|publish_date|
+--------------------+-------------------------+----------------------------+------------------------------+----------------+-----+--------------------+--------------------+-----+-----+------------+
|     Maeve Ascendant|          _CONFIG_CONTEXT|                       #id13|                          dfdf|              20|bk103|         Corets, Eva|After the collaps...| null| null|        null|
|      society in ...|      the young surviv...|                        null|                          null|            null| null|                null|                null| null| null|        null|
|      foundation ...|                  Fantasy|                        5.95|                    2000-11-17|            null| null|                null|                null| null| null|        null|
|       Midnight Rain|          _CONFIG_CONTEXT|                       #id13|                        dfdfrt|              19|bk102|          Ralls, Kim|A former architec...| null| null|        null|
|      an evil sor...|      and her own chil...|                        null|                          null|            null| null|                null|                null| null| null|        null|
|      of the world."|                  Fantasy|                        5.95|                    2000-12-16|            null| null|                null|                null| null| null|        null|
|XML Developer's G...|          _CONFIG_CONTEXT|                       #id13|                           qwe|              18|bk101|Gambardella, Matthew|An in-depth look ...| null| null|        null|
|         with XML...|                 Computer|                       44.95|                    2000-10-01|            null| null|                null|                null| null| null|        null|
+--------------------+-------------------------+----------------------------+------------------------------+----------------+-----+--------------------+--------------------+-----+-----+------------+

I need this CSV file to feed it to Amazon Athena. When I do so, Athena also renders the data in the same format shown in the second output. Ideally, after reading the converted CSV file back, it should display only 3 rows.

Any idea why this happens, and how to fix it so the CSV data is rendered in the correct format shown in the first output?

1 answer:

Answer 0 (score: 1)

The data in your description column must contain new line characters and commas, like this:

"After the collapse of a nanotechnology \nsociety in England, the young survivors lay the \nfoundation for a new society"
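As a side note, this is why such a value survives the write at all: an RFC 4180-style CSV writer (which Spark's CSV source follows) wraps any field containing commas, quotes, or newlines in double quotes. The `quoteCsvField` helper below is a hypothetical sketch of that rule, not a Spark API:

```scala
// Sketch of RFC 4180-style field quoting, as applied by CSV writers.
// `quoteCsvField` is an illustrative helper, not part of Spark.
def quoteCsvField(field: String): String =
  if (field.exists(c => c == ',' || c == '"' || c == '\n' || c == '\r'))
    "\"" + field.replace("\"", "\"\"") + "\""
  else field

val description =
  "After the collapse of a nanotechnology \nsociety in England, " +
  "the young survivors lay the \nfoundation for a new society"

// The writer wraps the field in quotes but keeps the raw newlines;
// only the surrounding quotes tell a parser they belong to one value.
val quoted = quoteCsvField(description)
```

Note that the embedded newlines are written out verbatim, so whether the file reads back correctly depends entirely on the reader honoring the quotes.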

So for testing purposes, I created a DataFrame:

val df = Seq(
  ("Maeve Ascendant", "_CONFIG_CONTEXT", "#id13", "dfdf", "20", "bk103", "Corets, Eva", "After the collapse of a nanotechnology \nsociety in England, the young survivors lay the \nfoundation for a new society", "Fantasy", "5.95", "2000-11-17")
).toDF("title", "UserData.UserValue._title", "UserData.UserValue._valueRef", "UserData.UserValue._valuegiven", "UserData._idUser", "_id", "author", "description", "genre", "price", "publish_date")

df.show() shows me the same DataFrame format as in the question:

+---------------+-------------------------+----------------------------+------------------------------+----------------+-----+-----------+--------------------+-------+-----+------------+
|          title|UserData.UserValue._title|UserData.UserValue._valueRef|UserData.UserValue._valuegiven|UserData._idUser|  _id|     author|         description|  genre|price|publish_date|
+---------------+-------------------------+----------------------------+------------------------------+----------------+-----+-----------+--------------------+-------+-----+------------+
|Maeve Ascendant|          _CONFIG_CONTEXT|                       #id13|                          dfdf|              20|bk103|Corets, Eva|After the collaps...|Fantasy| 5.95|  2000-11-17|
+---------------+-------------------------+----------------------------+------------------------------+----------------+-----+-----------+--------------------+-------+-----+------------+

But df.show(false) reveals the exact values:

+---------------+-------------------------+----------------------------+------------------------------+----------------+-----+-----------+---------------------------------------------------------------------------------------------------------------------+-------+-----+------------+
|title          |UserData.UserValue._title|UserData.UserValue._valueRef|UserData.UserValue._valuegiven|UserData._idUser|_id  |author     |description                                                                                                          |genre  |price|publish_date|
+---------------+-------------------------+----------------------------+------------------------------+----------------+-----+-----------+---------------------------------------------------------------------------------------------------------------------+-------+-----+------------+
|Maeve Ascendant|_CONFIG_CONTEXT          |#id13                       |dfdf                          |20              |bk103|Corets, Eva|After the collapse of a nanotechnology 
society in England, the young survivors lay the 
foundation for a new society|Fantasy|5.95 |2000-11-17  |
+---------------+-------------------------+----------------------------+------------------------------+----------------+-----+-----------+---------------------------------------------------------------------------------------------------------------------+-------+-----+------------+

When you save this as CSV, Spark writes it as a plain text file, keeping those newlines and commas as-is. And in the CSV format, a newline starts a new row and a comma starts a new field. That is the culprit behind your mangled data.
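The effect can be reproduced without Spark. Below is a minimal plain-Scala sketch: one CSV record whose quoted field contains two embedded newlines, read once line-by-line and once quote-aware. The `countRecords` helper is a toy illustration (it ignores escaped quotes), not how Spark is implemented:

```scala
// One CSV record whose quoted description field spans three lines.
val record =
  "bk103,\"After the collapse of a nanotechnology \nsociety in England, " +
  "the young survivors lay the \nfoundation for a new society\",Fantasy"

// A line-oriented reader (Spark's default CSV path) splits on every
// newline, turning this single logical record into three physical lines.
val physicalLines = record.split("\n").length  // 3

// A quote-aware reader (what the multiLine option enables) ends a record
// only on a newline that falls outside double quotes.
def countRecords(csv: String): Int = {
  var inQuotes = false
  var records  = 1
  for (c <- csv) c match {
    case '"'               => inQuotes = !inQuotes
    case '\n' if !inQuotes => records += 1
    case _                 =>
  }
  records
}

val logicalRecords = countRecords(record)  // 1
```

The three physical lines are exactly the three mangled rows you see in the second output.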


Solution 1

You can save the DataFrame in parquet format, which preserves the DataFrame's structure, and read it back as parquet:

df.write.parquet("hdfsOut")
var csvdf = spark.read.parquet("hdfsOut")


Solution 2

Save it in CSV format, and use the multiLine option when reading it back:

df.write.format("com.databricks.spark.csv").option("header", "true").save("hdfsOut")
var csvdf = spark.read.format("org.apache.spark.csv").option("multiLine", "true").option("header", true).csv("hdfsOut")

I hope this answer is helpful.