Streaming data from Flink to S3

Time: 2021-05-10 16:47:24

Tags: scala hadoop amazon-s3 apache-flink amazon-emr

I am running Flink on Amazon EMR and want to stream my pipeline results to an S3 bucket.

I am using Flink version 1.11.2.

Here is a code snippet showing what the code currently looks like:

import java.util.concurrent.TimeUnit

import org.apache.flink.api.common.serialization.SimpleStringEncoder
import org.apache.flink.core.fs.Path
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink
import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.DefaultRollingPolicy

val outputPath = new Path("s3://test/flinkStreamTest/failureLogs/dt=2021-04-15/")

// Row-format sink that writes each record as a UTF-8 string and rolls part files
// every 15 minutes, after 5 minutes of inactivity, or once a part reaches 1 GB.
val sink: StreamingFileSink[String] = StreamingFileSink
      .forRowFormat(outputPath, new SimpleStringEncoder[String]("UTF-8"))
      .withRollingPolicy(
        DefaultRollingPolicy.builder()
          .withRolloverInterval(TimeUnit.MINUTES.toMillis(15))
          .withInactivityInterval(TimeUnit.MINUTES.toMillis(5))
          .withMaxPartSize(1024 * 1024 * 1024)
          .build()
      )
      .build()

val enrichedStream = AsyncDataStream
      .unorderedWait(
        resConsumer,
        new AsyncElasticRequest(elasticIndexName, elasticHost, elasticPort),
        asyncTimeOut.toInt, TimeUnit.MILLISECONDS,
        asyncCapacity.toInt
      ) // this is my pipeline result; it returns a String

enrichedStream.addSink(sink)

env.execute("run pipeline") // this just runs the pipeline

Here is the error I am currently getting:

java.lang.UnsupportedOperationException: Recoverable writers on Hadoop are only supported for HDFS
    at org.apache.flink.runtime.fs.hdfs.HadoopRecoverableWriter.<init>(HadoopRecoverableWriter.java:61)
    at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.createRecoverableWriter(HadoopFileSystem.java:202)
    at org.apache.flink.core.fs.SafetyNetWrapperFileSystem.createRecoverableWriter(SafetyNetWrapperFileSystem.java:69)
    at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink$RowFormatBuilder.createBuckets(StreamingFileSink.java:260)
    at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.initializeState(StreamingFileSink.java:396)
    at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:185)
    at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:167)
    at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
    at org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:106)
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:258)
    at org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:290)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:479)
    at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:47)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:475)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:528)
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:721)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:546)
    at java.lang.Thread.run(Thread.java:748)

I have put the s3-fs-hadoop jar in the plugins/s3-fs-hadoop folder. I also have the same s3-fs-hadoop jar in /usr/lib/flink/lib, just in case Flink looks for it in that folder as well. Can someone please help me? I have searched and searched but cannot seem to resolve it.

Thanks

1 Answer:

Answer 0: (score: 0)

I figured it out. I needed to restart the entire long-running Flink application (not just restart the job). I also had to remove the s3-fs-hadoop jar I had put in the /usr/lib/flink/lib directory, while keeping a copy of the s3-fs-hadoop jar in the plugins/s3-fs-hadoop folder.
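
For reference, a sketch of the plugin layout this fix describes, assuming an EMR install under /usr/lib/flink and a jar built for Flink 1.11.2 (the exact jar file name may differ on your setup):

/usr/lib/flink/
├── lib/                                  (no flink-s3-fs-hadoop jar here)
└── plugins/
    └── s3-fs-hadoop/
        └── flink-s3-fs-hadoop-1.11.2.jar

Plugins are loaded through their own class loaders when the Flink processes start, which is consistent with needing to restart the whole application rather than just resubmitting the job.

Also worth noting: StreamingFileSink only finalizes in-progress part files when a checkpoint completes, so checkpointing must be enabled for finished files to appear in S3. A minimal sketch, assuming env is the StreamExecutionEnvironment from the question and an arbitrarily chosen one-minute interval:

// checkpoint every 60 seconds so the sink can commit part files
env.enableCheckpointing(TimeUnit.MINUTES.toMillis(1))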