I have a remote file in S3 (or another store) and I need its schema.

I did not find an option for sampling the data during schema inference like the one JSON has (e.g. read.option("samplingRatio", 0.25)).

Is there a way to make reading the schema more efficient?

Spark reads the entire CSV file over the network before returning the inferred schema; for a large file this can take quite a long time.

.option("samplingRatio", samplingRatioVal) does not work on CSV.
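For reference, a minimal sketch of the JSON-style sampling referred to above (the session setup and paths are hypothetical):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("sampling-sketch").getOrCreate()

// JSON schema inference can be restricted to a fraction of the input rows:
val jsonDf = spark.read
  .option("samplingRatio", 0.25)        // infer the schema from roughly 25% of the rows
  .json("s3a://my-bucket/data.json")    // hypothetical path

// Per the question, the same option on the CSV reader did not shorten the inference pass:
// spark.read.option("samplingRatio", 0.25).csv("s3a://my-bucket/data.csv")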
Answer 0 (score: 1)
import org.apache.spark.sql.execution.datasources.csv.{CSVOptions, TextInputCSVDataSource}
import org.apache.spark.sql.types.StructType
import org.apache.spark.sql.{DataFrameReader, Dataset, SparkSession}

/**
 * Infer the schema of a remote CSV file by reading only a sample of the file and inferring on that.
 * By default, Spark's schema inference reads the entire dataset once, which is undesirable for
 * large remote files (e.g. inferring the schema of a 3 GB file across oceans takes a while).
 * The speedup comes from reading only the first `schemaSampleSize` rows.
 *
 * Note: CSVOptions and TextInputCSVDataSource are internal Spark classes
 * (org.apache.spark.sql.execution.datasources.csv); their constructors and locations
 * may differ between Spark versions.
 *
 * @param fileLocation     path of the remote CSV file (e.g. an s3:// URI)
 * @param schemaSampleSize number of rows taken into consideration when inferring the schema
 * @param headerOption     whether the first line is a header
 * @param delimiterOption  the field delimiter
 * @return the inferred StructType
 */
def inferSchemaFromSample(sparkSession: SparkSession, fileLocation: String, schemaSampleSize: Int, headerOption: Boolean, delimiterOption: String): StructType = {
  val dataFrameReader: DataFrameReader = sparkSession.read
  // Read only the first `schemaSampleSize` lines of the file as plain text
  val dataSample: Array[String] = dataFrameReader.textFile(fileLocation).head(schemaSampleSize)
  val firstLine = dataSample.head

  import sparkSession.implicits._
  // Turn the sampled lines into a Dataset[String] so the CSV inference code can run on it
  val ds: Dataset[String] = sparkSession.createDataset(dataSample)

  val extraOptions = new scala.collection.mutable.HashMap[String, String]
  extraOptions += ("inferSchema" -> "true")
  extraOptions += ("header" -> headerOption.toString)
  extraOptions += ("delimiter" -> delimiterOption)

  val csvOptions: CSVOptions = new CSVOptions(extraOptions.toMap, sparkSession.sessionState.conf.sessionLocalTimeZone)
  // Run Spark's CSV schema inference on the in-memory sample only
  val schema: StructType = TextInputCSVDataSource.inferFromDataset(sparkSession, ds, Some(firstLine), csvOptions)
  schema
}
For example:

schemaSampleSize = 10000
delimiterOption = ","
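A usage sketch, assuming the function above is in scope (the bucket path is hypothetical): infer the schema once from the sample, then pass it to the CSV reader explicitly so Spark skips the full-file inference pass.

val spark = SparkSession.builder().appName("infer-from-sample").getOrCreate()

val path = "s3a://my-bucket/large-file.csv"   // hypothetical location

// Infer the schema from the first 10000 rows only
val schema = inferSchemaFromSample(spark, path, schemaSampleSize = 10000, headerOption = true, delimiterOption = ",")

// Read the full file with the explicit schema, so no second inference pass is needed
val df = spark.read
  .schema(schema)
  .option("header", "true")
  .option("delimiter", ",")
  .csv(path)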