How to integrate a custom table source and a custom table sink with the SQL Client?

Asked: 2018-09-05 03:27:40

Tags: apache-flink

Suppose we have defined a custom TableSource and a custom TableSink; how do we integrate them with the SQL Client? Do I need to manually register the custom TableSource/TableSink under a name, as shown below? And if I don't register it manually, how does the connector type `custom1` get mapped to `custom1TableSource`?

StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
TableSource custom1TableSource = new custom1TableSource();
tableEnv.registerTableSource("custom1", custom1TableSource);

And then configure it in the environment file like this?

   tables:
     - name: custom1TableSource
       type: source
       update-mode: append
       connector:
         property-version: 1
         type: custom1

The source and sink I declared:

package com.abc;
public class custom1TableSource implements StreamTableSource<Row>, DefinedRowtimeAttributes, DefinedProctimeAttribute {


package com.abc;
public class custom1TableSink implements TableSink<Row>, AppendStreamTableSink<Row> {

https://ci.apache.org/projects/flink/flink-docs-release-1.6/dev/table/sqlClient.html#configuration

Update:

After some digging in the source code, I found that Flink creates the sink and source instances through factories that implement StreamTableSinkFactory (or StreamTableSourceFactory) and are discovered via Java's ServiceLoader. But how do the sink and source names get registered to the TableSource and TableSink classes?
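
For reference, this appears to be the standard Java SPI mechanism: the connector jar ships a provider-configuration file under META-INF/services that lists the factory classes. A sketch of what that file would contain, assuming a hypothetical factory class com.abc.Custom1TableSourceFactory:

# META-INF/services/org.apache.flink.table.factories.TableFactory
com.abc.Custom1TableSourceFactory

The ServiceLoader reads this file from every jar on the classpath, which is how the SQL Client finds factories without any manual registration call.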

2 Answers:

Answer 0 (score: 0)

I found the answer: the factory needs to override requiredContext() and set connector.type by hand. Taking Kafka as an example, it assigns "kafka" to connector.type:

public abstract class KafkaTableSourceSinkFactoryBase implements
        StreamTableSourceFactory<Row>,
        StreamTableSinkFactory<Row> {

    @Override
    public Map<String, String> requiredContext() {
        Map<String, String> context = new HashMap<>();
        context.put(UPDATE_MODE(), UPDATE_MODE_VALUE_APPEND()); // append mode
        context.put(CONNECTOR_TYPE(), CONNECTOR_TYPE_VALUE_KAFKA); // kafka
        context.put(CONNECTOR_VERSION(), kafkaVersion()); // version
        context.put(CONNECTOR_PROPERTY_VERSION(), "1"); // backwards compatibility
        return context;
    }
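
Applying the same pattern to the source from the question, a minimal factory sketch for Flink 1.6 might look like this (the class name Custom1TableSourceFactory and the connector.path property are my own assumptions, and I use string literals where Flink's own factories use constants):

package com.abc;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.flink.table.factories.StreamTableSourceFactory;
import org.apache.flink.table.sources.StreamTableSource;
import org.apache.flink.types.Row;

public class Custom1TableSourceFactory implements StreamTableSourceFactory<Row> {

    @Override
    public Map<String, String> requiredContext() {
        Map<String, String> context = new HashMap<>();
        context.put("update-mode", "append");
        // this entry is what maps `type: custom1` in the YAML file to this factory
        context.put("connector.type", "custom1");
        context.put("connector.property-version", "1");
        return context;
    }

    @Override
    public List<String> supportedProperties() {
        // every property key the connector accepts; unknown keys fail validation
        List<String> properties = new ArrayList<>();
        properties.add("connector.path"); // hypothetical example property
        return properties;
    }

    @Override
    public StreamTableSource<Row> createStreamTableSource(Map<String, String> properties) {
        // construct the source class declared in the question
        return new custom1TableSource();
    }
}

With this factory on the SQL Client's classpath and listed in the SPI service file, the environment file's connector properties are matched against requiredContext(), and createStreamTableSource() is called with the remaining properties.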

Answer 1 (score: 0)

Please have a look at the documentation for user-defined sources & sinks: https://ci.apache.org/projects/flink/flink-docs-release-1.6/dev/table/sourceSinks.html

Both the SQL Client and the Table & SQL API use TableFactory instances, which are discovered via Java's Service Provider Interface (SPI).
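
Putting the pieces together, an environment file entry matching such a factory could look like the following sketch (it assumes the hypothetical custom1 factory above; the path key must appear in the factory's supportedProperties()):

tables:
  - name: custom1
    type: source
    update-mode: append
    connector:
      property-version: 1
      type: custom1
      path: /tmp/input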