I can successfully load a text file into a DataFrame with the following Apache Spark Scala code:
val df = spark.read.text("first.txt")
.withColumn("fileName", input_file_name())
.withColumn("unique_id", monotonically_increasing_id())
Is there a way to supply multiple files in a single run? Something like this:
val df = spark.read.text("first.txt,second.txt,someother.txt")
.withColumn("fileName", input_file_name())
.withColumn("unique_id", monotonically_increasing_id())
Currently, the code above fails with the following error:
Exception in thread "main" org.apache.spark.sql.AnalysisException: Path does not exist: file:first.txt,second.txt,someother.txt;
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:558)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:545)
How do I correctly load multiple text files?
Answer 0 (score: 2)
The function spark.read.text() has a varargs parameter, as shown in the docs:
def text(paths: String*): DataFrame
This means that to read multiple files, you simply pass them to the function as separate, comma-separated arguments, i.e.
val df = spark.read.text("first.txt", "second.txt", "someother.txt")
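If the file names are only known at runtime and held in a collection, standard Scala varargs expansion with `: _*` works as well. A minimal sketch (the file names here are just the examples from the question, and the columns added mirror the original snippet):

```scala
import org.apache.spark.sql.functions.{input_file_name, monotonically_increasing_id}

// File paths collected at runtime, e.g. from a config or directory listing
val paths = Seq("first.txt", "second.txt", "someother.txt")

// Expand the Seq into the varargs parameter of text(paths: String*)
val df = spark.read.text(paths: _*)
  .withColumn("fileName", input_file_name())
  .withColumn("unique_id", monotonically_increasing_id())
```

Note that `input_file_name()` still reports the correct source file for each row, since Spark tracks the originating path per partition. The single-string form also accepts glob patterns (e.g. `spark.read.text("*.txt")`) if the files share a common naming scheme.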