I am using Spark 2.3 with Scala and am trying to load multiple CSV files from a directory, but I am running into a problem where the files load with some of their columns missing.
I have the following sample files:
test1.csv
Col1,Col2,Col3,Col4,Col5
aaa,2,3,4,5
aaa,2,3,4,5
aaa,2,3,4,5
aaa,2,3,4,5
aaa,2,3,4,5
aaa,2,3,4,5
aaa,2,3,4,5
aaa,2,3,4,5
aaa,2,3,4,5
test2.csv
Col1,Col2,Col3,Col4
aaa,2,3,4
aaa,2,3,4
aaa,2,3,4
aaa,2,3,4
aaa,2,3,4
aaa,2,3,4
aaa,2,3,4
aaa,2,3,4
aaa,2,3,4
test3.csv
Col1,Col2,Col3,Col4,Col6
aaa,2,3,4,6
aaa,2,3,4,6
aaa,2,3,4,6
aaa,2,3,4,6
aaa,2,3,4,6
aaa,2,3,4,6
aaa,2,3,4,6
aaa,2,3,4,6
aaa,2,3,4,6
test4.csv
Col1,Col2,Col5,Col4,Col3
aaa,2,5,4,3
aaa,2,5,4,3
aaa,2,5,4,3
aaa,2,5,4,3
aaa,2,5,4,3
aaa,2,5,4,3
aaa,2,5,4,3
aaa,2,5,4,3
aaa,2,5,4,3
What I want to do is load all of these files into a single dataframe containing every column that appears across the 4 files, but when I try to load the files with the following code
val dft = spark.read.format("csv").option("header", "true").load("path/to/directory/*.csv")
it loads the CSVs but drops some of the columns.
Here is the output of dft.show():
+----+----+----+----+----+
|Col1|Col2|Col3|Col4|Col6|
+----+----+----+----+----+
| aaa| 2| 3| 4| 6|
| aaa| 2| 3| 4| 6|
| aaa| 2| 3| 4| 6|
| aaa| 2| 3| 4| 6|
| aaa| 2| 3| 4| 6|
| aaa| 2| 3| 4| 6|
| aaa| 2| 3| 4| 6|
| aaa| 2| 3| 4| 6|
| aaa| 2| 3| 4| 6|
| aaa| 2| 5| 4| 3|
| aaa| 2| 5| 4| 3|
| aaa| 2| 5| 4| 3|
| aaa| 2| 5| 4| 3|
| aaa| 2| 5| 4| 3|
| aaa| 2| 5| 4| 3|
| aaa| 2| 5| 4| 3|
| aaa| 2| 5| 4| 3|
| aaa| 2| 5| 4| 3|
| aaa| 2| 3| 4| 5|
| aaa| 2| 3| 4| 5|
+----+----+----+----+----+
I want it to look like this:
+----+----+----+----+----+----+
|Col1|Col2|Col3|Col4|Col5|Col6|
+----+----+----+----+----+----+
| aaa| 2| 3| 4| 5| 6|
| aaa| 2| 3| 4| 5| 6|
| aaa| 2| 3| 4| 5| 6|
| aaa| 2| 3| 4| 5| 6|
| aaa| 2| 3| 4| 5| 6|
| aaa| 2| 3| 4| 5| 6|
| aaa| 2| 3| 4| 5| 6|
| aaa| 2| 3| 4| 5| 6|
| aaa| 2| 3| 4| 5| 6|
| aaa| 2| 3| 4| 5| 6|
| aaa| 2| 3| 4| 5| 6|
| aaa| 2| 3| 4| 5| 6|
| aaa| 2| 3| 4| 5| 6|
| aaa| 2| 3| 4| 5| 6|
| aaa| 2| 3| 4| 5| 6|
| aaa| 2| 3| 4| 5| 6|
| aaa| 2| 3| 4| 5| 6|
| aaa| 2| 3| 4| 5| 6|
| aaa| 2| 3| 4| 5| 6|
| aaa| 2| 3| 4| 5| 6|
+----+----+----+----+----+----+
Please advise me: what is wrong with my code? Or is there a more efficient way to do this?
Thanks
Answer 0 (score: 0)
If each file is not too large, you can use wholeTextFiles and parse the files yourself, as follows:
import org.apache.spark.sql.Row

val columns = (1 to 6).map("Col" + _)
val rdd = sc.wholeTextFiles("path_to_files/*")
  .map(_._2.split("\\n"))
  .flatMap(x => {
    // We consider the first line as the header.
    val cols = x.head.split(",")
    // Then we flatten the remaining lines and shape each of them
    // as a list of tuples (ColumnName, content).
    x.tail
      .map(_.split(","))
      .map(row => row.indices.map(i => cols(i) -> row(i)))
  })
  .map(_.toMap)
  // Here we take the list of all the columns and map each of them to
  // its value if it exists, null otherwise.
  .map(map => columns.map(name => map.getOrElse(name, null)))
  .map(Row.fromSeq _)
This code uses wholeTextFiles to put each file into a single record (which is why the files cannot be too large), uses the first line to determine which columns are present and in what order, builds a map from column name to value for each row, and turns each map into a row containing null wherever a value is missing (a small illustration of this step follows the next snippet). The data is then ready to go into a dataframe:
import org.apache.spark.sql.types.{StringType, StructField, StructType}

val schema = StructType(
  columns.map(name => StructField(name, StringType, true))
)
spark.createDataFrame(rdd, schema).show()
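To make the null-filling step concrete, here is what the intermediate map looks like for a row taken from test2.csv, which lacks Col5 and Col6 (a minimal sketch using the sample values above):

// The parsed header of test2.csv is Col1,Col2,Col3,Col4, so the map has
// no entries for Col5 and Col6 and getOrElse fills them with null.
val rowMap = Map("Col1" -> "aaa", "Col2" -> "2", "Col3" -> "3", "Col4" -> "4")
val filled = (1 to 6).map("Col" + _).map(name => rowMap.getOrElse(name, null))
// filled: Vector(aaa, 2, 3, 4, null, null)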
Answer 1 (score: 0)
I found a solution to the problem I wanted to solve, so I thought I should share it with anyone trying to achieve the same output.
I solved the task of merging different files with common columns by using Parquet and its schema merging.
Here is the code:
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SparkSession

val conf = new SparkConf()
  .setAppName("Exercise")
  .setMaster("local")
val sc = new SparkContext(conf)
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
val spark = SparkSession
  .builder()
  .appName("Spark Sql Session")
  .config("spark.some.config.option", "test")
  .getOrCreate()

// Collect the path of every input file in the directory.
val filepath = sc.wholeTextFiles("path/to/MergeFiles/*.csv").keys
val list = filepath.collect().toList

// Write each file out as Parquet under its own partition key.
var i = 1
list.foreach { path =>
  val df = sqlContext.read
    .format("com.databricks.spark.csv")
    .option("delimiter", ",")
    .option("header", "true")
    .load(path)
  df.write.parquet("data/test_tbl/key=" + i)
  i += 1
}

// Read all the partitions back with schema merging enabled, then save as CSV.
val mergedDF = spark.read.option("mergeSchema", "true").parquet("data/test_tbl")
mergedDF.write.format("csv").save("target/directory/for/mergedFiles")
And here is the output of mergedDF.show():
+----+----+----+----+----+----+---+
|Col1|Col2|Col3|Col4|Col6|Col5|key|
+----+----+----+----+----+----+---+
|aaa |2 |3 |4 |6 |null|2 |
|aaa |2 |3 |4 |6 |null|2 |
|aaa |2 |3 |4 |6 |null|2 |
|aaa |2 |3 |4 |6 |null|2 |
|aaa |2 |3 |4 |6 |null|2 |
|aaa |2 |3 |4 |6 |null|2 |
|aaa |2 |3 |4 |6 |null|2 |
|aaa |2 |3 |4 |6 |null|2 |
|aaa |2 |3 |4 |6 |null|2 |
|aaa |2 |3 |4 |null|5 |4 |
|aaa |2 |3 |4 |null|5 |4 |
|aaa |2 |3 |4 |null|5 |4 |
|aaa |2 |3 |4 |null|5 |4 |
|aaa |2 |3 |4 |null|5 |4 |
|aaa |2 |3 |4 |null|5 |4 |
|aaa |2 |3 |4 |null|5 |4 |
|aaa |2 |3 |4 |null|5 |4 |
|aaa |2 |3 |4 |null|5 |4 |
|aaa |2 |3 |4 |null|5 |3 |
|aaa |2 |3 |4 |null|5 |3 |
+----+----+----+----+----+----+---+
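Note that mergedDF still carries the synthetic key partition column and the columns are not in the desired Col1..Col6 order. As a possible cleanup step (not part of the original answer; drop and select are standard DataFrame methods), you could tidy it up before writing:

// Drop the partition key and put the columns in the desired order;
// rows keep null wherever a source file lacked that column.
val cleanedDF = mergedDF
  .drop("key")
  .select("Col1", "Col2", "Col3", "Col4", "Col5", "Col6")
cleanedDF.show()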