I want to read streaming data from a Kafka topic and write it to S3 in Avro or Parquet format. The stream elements look like JSON strings, but I haven't been able to convert them and write them to S3 as Avro or Parquet.
I found some code snippets and tried this:
val sink = StreamingFileSink
  .forBulkFormat(new Path(outputS3Path), ParquetAvroWriters.forReflectRecord(classOf[myClass]))
  .build()
But on addSink I get "Type mismatch, expected: SinkFunction[String], actual: StreamingFileSink[TextOut]" here:
val stream = env
  .addSource(myConsumerSource)
  .addSink(sink)
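From the error it looks like env.addSource(myConsumerSource) gives me a DataStream[String], while the sink was built for myClass, so I assume the strings have to be parsed into myClass before addSink. A rough sketch of what I mean (parseToMyClass stands in for the JSON parsing I would still have to write):

val parsed = env
  .addSource(myConsumerSource)       // DataStream[String] of JSON
  .map(json => parseToMyClass(json)) // placeholder: String => myClass
parsed.addSink(sink)                 // element types now match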
Please help, thanks!
Answer 0 (score: 0)
As a workaround you can use AWS Kinesis Firehose: after basic ETL, convert the Flink Table/SQL query result to String and write it to a Kinesis stream, then configure a Firehose delivery stream from the AWS Console to write to S3 in Parquet format.
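A minimal sketch of the Flink-to-Kinesis half, assuming the flink-connector-kinesis dependency; the region and stream name are placeholders, and Firehose itself is set up in the console:

import java.util.Properties
import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisProducer
import org.apache.flink.streaming.connectors.kinesis.config.AWSConfigConstants

val producerConfig = new Properties()
producerConfig.put(AWSConfigConstants.AWS_REGION, "us-east-1") // placeholder region

// Send the already-stringified rows to the Kinesis stream that the
// Firehose delivery stream reads from.
val kinesisSink = new FlinkKinesisProducer[String](new SimpleStringSchema(), producerConfig)
kinesisSink.setDefaultStream("my-firehose-input-stream") // placeholder name
kinesisSink.setDefaultPartition("0")

stringStream.addSink(kinesisSink) // stringStream: the DataStream[String] from the ETL step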
Kafka example: https://github.com/kali786516/FlinkStreamAndSql/tree/master/src/main/scala/com/aws/examples/kafka
Answer 1 (score: 0)
Here is the code I used to write Parquet files to the local filesystem.
import org.apache.avro.generic.GenericRecord
import org.apache.avro.{Schema, SchemaBuilder}
import org.apache.flink.core.fs.Path
import org.apache.flink.formats.parquet.avro.ParquetAvroWriters
import org.apache.flink.streaming.api.datastream.DataStreamSource
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink
val env = StreamExecutionEnvironment.getExecutionEnvironment()
// The bulk-format StreamingFileSink only finalizes part files on checkpoints,
// so checkpointing must be enabled for any output to become visible.
env.enableCheckpointing(100)
// Avro schema with a single required string field, "message"
val schema = SchemaBuilder
.record("record")
.fields()
.requiredString("message")
.endRecord()
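// The original snippet does not define genericRecordList; a hypothetical
// one-element input so the example is self-contained:
val record: GenericRecord = new org.apache.avro.generic.GenericData.Record(schema)
record.put("message", "hello parquet")
val genericRecordList = java.util.Collections.singletonList(record)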
// Bounded test source; in the real job this would be the Kafka consumer
val stream: DataStreamSource[GenericRecord] = env.fromCollection(genericRecordList)
val path = new Path(s"/tmp/flink-parquet-${System.currentTimeMillis()}")
val sink: StreamingFileSink[GenericRecord] = StreamingFileSink
.forBulkFormat(path, ParquetAvroWriters.forGenericRecord(schema))
.build()
stream.addSink(sink)
env.execute()
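To target S3 instead of /tmp, the same sink should work with an s3:// path, assuming one of Flink's S3 filesystem plugins (flink-s3-fs-hadoop or flink-s3-fs-presto) is installed on the cluster:

val path = new Path("s3://my-bucket/flink-parquet") // hypothetical bucket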