I want to ingest many small text files via Spark into Parquet. Currently I use wholeTextFiles and perform some parsing on top.

To be more precise: these small text files are ESRI ASCII Grid files, each at most around 400 KB in size. GeoTools is used to parse them, as outlined below.

Do you see any optimization possibilities? Maybe something to avoid the creation of unnecessary objects? Or something that handles the small files better? I wonder whether it would be better to only get the paths of the files and read them manually instead of going through String -> ByteArrayInputStream; a sketch of that variant follows the code below.
import java.io.ByteArrayInputStream
import java.nio.charset.StandardCharsets

import scala.collection.mutable

import org.apache.spark.sql.{Dataset, SparkSession}
import org.geotools.gce.arcgrid.ArcGridReader
import org.geotools.process.raster.PolygonExtractionProcess
// JTS classes live under org.locationtech.jts in GeoTools 20+ (com.vividsolutions.jts in older releases)
import org.locationtech.jts.geom.Geometry
import org.locationtech.jts.io.WKTWriter

case class RawRecords(path: String, content: String)
case class GeometryId(idPath: String, value: Double, geo: String)

// Created lazily on each executor, so the non-serializable GeoTools objects are not shipped with the closure
@transient lazy val extractor = new PolygonExtractionProcess()
@transient lazy val writer = new WKTWriter()
def readRawFiles(path: String, parallelism: Int, spark: SparkSession): Dataset[GeometryId] = {
  import spark.implicits._
  spark.sparkContext
    .wholeTextFiles(path, parallelism)   // RDD[(path, file content)]
    .toDF("path", "content")
    .as[RawRecords]
    .mapPartitions(mapToSimpleTypes)
}
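// Hypothetical usage, for illustration only (the glob and parallelism are made up):
// readRawFiles("/data/grids/*.asc", 200, spark).write.parquet("/data/geometries.parquet")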
def mapToSimpleTypes(iterator: Iterator[RawRecords]): Iterator[GeometryId] = iterator.flatMap { r =>
  // http://docs.geotools.org/latest/userguide/library/coverage/arcgrid.html
  // Reuse the shared @transient lazy val extractor; instantiating a new
  // PolygonExtractionProcess per record only creates unnecessary garbage.
  val readRaster = new ArcGridReader(new ByteArrayInputStream(r.content.getBytes(StandardCharsets.UTF_8))).read(null)
  // Vectorize band 0 including inside edges; the remaining (null) arguments keep the defaults
  val vectorizedFeatures = extractor.execute(readRaster, 0, true, null, null, null, null).features
  // TODO maybe consider optimization of known size instead of using a growable data structure
  val result = mutable.Buffer[GeometryId]()
  while (vectorizedFeatures.hasNext) {
    val vectorizedFeature = vectorizedFeatures.next()
    val geomWKTLineString = vectorizedFeature.getDefaultGeometry match {
      case g: Geometry => writer.write(g)
    }
    val geomUserdata = vectorizedFeature.getAttribute(1).asInstanceOf[Double]
    result += GeometryId(r.path, geomUserdata, geomWKTLineString)
  }
  result
}
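For comparison, here is a minimal sketch of the variant I am wondering about, using sc.binaryFiles so the raw bytes are fed straight to ArcGridReader without the String -> ByteArrayInputStream round trip. It is untested and reuses the extractor, writer and case classes from above; the function name is my own.

// Sketch: binaryFiles hands back (path, PortableDataStream) pairs; open() yields a
// DataInputStream over the file bytes, so no String decode/encode round trip is needed.
def readRawFilesBinary(path: String, parallelism: Int, spark: SparkSession): Dataset[GeometryId] = {
  import spark.implicits._
  spark.sparkContext
    .binaryFiles(path, parallelism)
    .mapPartitions(_.flatMap { case (p, stream) =>
      val readRaster = new ArcGridReader(stream.open()).read(null)
      val features = extractor.execute(readRaster, 0, true, null, null, null, null).features
      val result = mutable.Buffer[GeometryId]()
      while (features.hasNext) {
        val feature = features.next()
        val geo = feature.getDefaultGeometry match { case g: Geometry => writer.write(g) }
        result += GeometryId(p, feature.getAttribute(1).asInstanceOf[Double], geo)
      }
      result
    })
    .toDS()
}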
Answer 0 (score: 2)
I have suggestions:

- Use wholeTextFiles -> mapPartitions -> convert to Dataset, in that order (see the sketch after this list). Why? If you call mapPartitions on a Dataset, all rows are converted from the internal format to objects, and that causes additional serialization.
- binaryFiles gives you a Stream, so you cannot … in mapPartitions
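A minimal sketch of the first suggestion, reusing the question's mapToSimpleTypes and RawRecords (the function name and ordering are mine, untested): run mapPartitions on the raw RDD and convert to a Dataset only once, at the end, so rows never round-trip through Spark's internal row format.

// Sketch: parse on the RDD first, single RDD -> Dataset conversion at the end
def readRawFilesRddFirst(path: String, parallelism: Int, spark: SparkSession): Dataset[GeometryId] = {
  import spark.implicits._
  spark.sparkContext
    .wholeTextFiles(path, parallelism)   // RDD[(path, file content)]
    .mapPartitions(it => mapToSimpleTypes(it.map { case (p, c) => RawRecords(p, c) }))
    .toDS()                              // the only conversion to the internal format
}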