"The implementation of the FlinkKafkaConsumerBase is not serializable" in the Flink SQL Client

Date: 2020-05-19 09:04:58

Tags: apache-flink flink-streaming flink-sql

I am using the Flink SQL Client with a Kafka source and the Avro format, configured as follows:

tables:
  - name: event    # name the new table
    type: source   # declare if the table should be "source", "sink", or "both"
    # declare the external system to connect to
    connector:
      type: kafka
      version: "universal"
      topic: event
      startup-mode: latest-offset
      properties:
        zookeeper.connect: localhost:2181
        bootstrap.servers: localhost:9092
        group.id: testgroup
    # declare a format for this system
    format:
      type: avro
      record-class: "org.flink.event"
    # declare the schema of the table
    schema:
      - name: fields
        data-type: ROW<`id` BIGINT, `time` BIGINT>

The table "event" is created successfully, but when I run:

select * from event;

I get the error "The implementation of the FlinkKafkaConsumerBase is not serializable". The Kafka topic has more columns besides "fields", but I wanted to try a single column first.
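Context for the error: Flink ships every operator, including the Kafka consumer together with its deserialization schema and the Avro record class it references, to the workers via Java serialization, so all of them must implement java.io.Serializable. A minimal stdlib-only sketch of the check this implies (the Event class here is a hypothetical stand-in for org.flink.event, not the asker's actual class):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializableCheck {
    // Hypothetical stand-in for the Avro record class "org.flink.event".
    static class Event implements Serializable {
        long id;
        long time;
    }

    // Returns true if the object survives Java serialization, which is
    // what Flink requires of everything embedded in the job graph.
    static boolean isSerializable(Object o) {
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isSerializable(new Event()));   // prints true
        System.out.println(isSerializable(new Object()));  // prints false
    }
}
```

If the record class (or anything it holds transitively) fails this kind of check, Flink rejects the whole consumer as "not serializable", which matches the error in the title.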

0 Answers:

There are no answers yet.