How to read an Avro schema from an empty RDD?

Date: 2017-12-04 15:12:29

Tags: apache-spark avro spark-avro

I use AvroKeyInputFormat to read avro files:

val records = sc.newAPIHadoopFile[AvroKey[T], NullWritable, AvroKeyInputFormat[T]](path)
  .map(_._1.datum())
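
For reference, a self-contained sketch of the same read with a concrete datum type (GenericRecord) is shown below; the imports and the input path are assumptions added for illustration, not part of the original snippet:

import org.apache.avro.generic.GenericRecord
import org.apache.avro.mapred.AvroKey
import org.apache.avro.mapreduce.AvroKeyInputFormat
import org.apache.hadoop.io.NullWritable

// Read avro files as (AvroKey[GenericRecord], NullWritable) pairs and keep only the datums.
val path = "hdfs:///data/events"  // placeholder path
val records = sc.newAPIHadoopFile[AvroKey[GenericRecord], NullWritable, AvroKeyInputFormat[GenericRecord]](path)
  .map(_._1.datum())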

Since I need to reflect on the schema in my job, I obtain the Avro schema like this:

val schema = records.first.getSchema

Unfortunately, this fails if the avro files in path are empty (they contain the writer schema, but no records).

Is there a simple way to load the Avro schema using only Spark, even when there are no records?

1 answer:

Answer 0 (score: 2):

I found a solution (inspired by com.databricks.spark.avro.DefaultSource):

import java.net.URI

import org.apache.avro.Schema
import org.apache.avro.file.DataFileReader
import org.apache.avro.generic.{GenericDatumReader, GenericRecord}
import org.apache.avro.mapred.FsInput
import org.apache.hadoop.fs.{FileStatus, FileSystem, Path}
import org.apache.spark.SparkContext

/**
  * Loads a schema from avro files in `directory`. This method also works if none
  * of the avro files contain any records, since only the file header is read.
  */
def schema(directory: String)(implicit sc: SparkContext): Schema = {
  val fs = FileSystem.get(new URI(directory), sc.hadoopConfiguration)
  val it = fs.listFiles(new Path(directory), false)

  var avroFile: Option[FileStatus] = None

  while (it.hasNext && avroFile.isEmpty) {
    val fileStatus = it.next()

    if (fileStatus.isFile && fileStatus.getPath.getName.endsWith(".avro")) {
      avroFile = Some(fileStatus)
    }
  }

  avroFile.fold {
    throw new Exception(s"No avro files found in $directory")
  } { file =>
    val in = new FsInput(file.getPath, sc.hadoopConfiguration)
    try {
      val reader = DataFileReader.openReader(in, new GenericDatumReader[GenericRecord]())
      try {
        reader.getSchema
      } finally {
        reader.close()
      }
    } finally {
      in.close()
    }
  }
}
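
For completeness, a minimal usage sketch follows. The SparkConf setup, the application name, the local master, and the directory path are assumptions for illustration, not part of the original answer:

import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical driver code: obtain the schema without reading any records.
implicit val sc: SparkContext =
  new SparkContext(new SparkConf().setAppName("avro-schema-only").setMaster("local[*]"))

val avroSchema: Schema = schema("hdfs:///data/events")  // placeholder directory
println(avroSchema.toString(true))                      // pretty-print the schema as JSON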