I have only found TextInputFormat and CsvInputFormat. So how can I read Parquet files from HDFS with Apache Flink?
Answer 0 (score: 2)
OK, I have found a way to read Parquet files from HDFS with Apache Flink.
Add the following dependencies to your pom.xml:
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-hadoop-compatibility_2.11</artifactId>
    <version>1.6.1</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-avro</artifactId>
    <version>1.6.1</version>
</dependency>
<dependency>
    <groupId>org.apache.parquet</groupId>
    <artifactId>parquet-avro</artifactId>
    <version>1.10.0</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-core</artifactId>
    <version>3.1.1</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>3.1.1</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-core</artifactId>
    <version>1.2.1</version>
</dependency>
Create an .avsc file to define the schema. For example:
{
  "namespace": "com.flinklearn.models",
  "type": "record",
  "name": "AvroTamAlert",
  "fields": [
    {"name": "raw_data", "type": ["string", "null"]}
  ]
}
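Before generating code, it can be worth sanity-checking that the file is a valid Avro schema. The Avro library (pulled in transitively by parquet-avro) can parse it directly; the following is a minimal sketch, assuming alert.avsc sits in the working directory:

```scala
import org.apache.avro.Schema

object SchemaCheck {
  def main(args: Array[String]): Unit = {
    // Parse the schema file; this throws if the JSON is not a valid Avro schema.
    val schema = new Schema.Parser().parse(new java.io.File("alert.avsc"))
    println(schema.getFullName) // com.flinklearn.models.AvroTamAlert
    // The ["string","null"] union is what makes raw_data nullable.
    println(schema.getField("raw_data").schema())
  }
}
```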
Run `java -jar D:\avro-tools-1.8.2.jar compile schema alert.avsc` to generate the Java class, then copy AvroTamAlert.java into your project.
Use AvroParquetInputFormat to read the Parquet files from HDFS:
import org.apache.flink.api.scala._
import org.apache.flink.api.scala.hadoop.mapreduce.HadoopInputFormat
import org.apache.hadoop.fs.Path
import org.apache.hadoop.mapreduce.Job
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat
import org.apache.parquet.avro.AvroParquetInputFormat

class Main {
  def startApp(): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    // Wrap the MapReduce AvroParquetInputFormat via Flink's Hadoop compatibility layer.
    val job = Job.getInstance()
    val inputFormat = new HadoopInputFormat[Void, AvroTamAlert](
      new AvroParquetInputFormat(), classOf[Void], classOf[AvroTamAlert], job)
    FileInputFormat.addInputPath(job, new Path("/user/hive/warehouse/testpath"))

    val dataset = env.createInput(inputFormat)

    // count() is itself a sink that triggers execution, so no separate
    // env.execute() call is needed (it would fail with "no new data sinks").
    println(dataset.count())
  }
}

object Main {
  def main(args: Array[String]): Unit = {
    new Main().startApp()
  }
}
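Once the DataSet is created, each element is a (key, value) pair produced by the Hadoop input format, where the value is an instance of the generated Avro class. A hypothetical continuation of startApp above, assuming the generated class exposes a getRawData getter for the raw_data field (the standard Avro naming convention):

```scala
// Hypothetical follow-up inside startApp: the Hadoop input format yields
// (Void, AvroTamAlert) pairs, so drop the key and read the generated getter.
// raw_data is nullable in the schema, hence the Option wrapping.
val rawData = dataset
  .map { pair => Option(pair._2.getRawData).map(_.toString).getOrElse("") }

// print() is also a sink and triggers execution of the plan.
rawData.first(10).print()
```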