Batch Table API problem in Flink 1.5 - complains about a need for the Streaming API

Asked: 2018-06-19 08:30:53

Tags: apache-flink flink-sql

I am trying to create a batch-oriented Flink job with Flink 1.5.0 and want to use the Table and SQL APIs to process the data. My problem is that when I try to create a BatchTableEnvironment, I get a compilation error:

BatchJob.java:[46,73] cannot access org.apache.flink.streaming.api.environment.StreamExecutionEnvironment

caused by the line:

final BatchTableEnvironment bTableEnv = TableEnvironment.getTableEnvironment(bEnv);

As far as I know, I have no dependency on a streaming environment. My code is shown in the snippet below.

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.BatchTableEnvironment;
import org.apache.flink.table.sources.CsvTableSource;
import org.apache.flink.table.sources.TableSource;

import java.util.Date;


public class BatchJob {

    public static void main(String[] args) throws Exception {
        final ExecutionEnvironment bEnv = ExecutionEnvironment.getExecutionEnvironment();
        // create a TableEnvironment for batch queries
        final BatchTableEnvironment bTableEnv = TableEnvironment.getTableEnvironment(bEnv);
        // ... do stuff

        // execute program
        bEnv.execute("My Batch Job");
    }
}

My pom dependencies are as follows:

<dependencies>
        <!-- Apache Flink dependencies -->
        <!-- These dependencies are provided, because they should not be packaged into the JAR file. -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-java</artifactId>
            <version>${flink.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-scala_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>

        <!-- Add connector dependencies here. They must be in the default scope (compile). -->

        <!-- Example:

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-kafka-0.10_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>
        -->

        <!-- Add logging framework, to produce console output when running in the IDE. -->
        <!-- These dependencies are excluded from the application JAR by default. -->
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>1.7.7</version>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.17</version>
            <scope>runtime</scope>
        </dependency>

    </dependencies>

Can someone help me understand the dependency on the Streaming API and why I need it for a batch job? Many thanks for your help. Oliver

1 Answer:

Answer 0 (score: 0)

Flink's Table API and SQL support is a unified API for batch and stream processing. Many internal classes are shared between batch and stream execution, as well as between the Scala/Java Table APIs and SQL, and they are therefore linked against both Flink's batch and streaming dependencies.
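To make the unified entry point concrete, here is a minimal sketch against the Flink 1.5 Java API (the class name is illustrative): the same TableEnvironment factory method returns a batch or a streaming table environment depending on which execution environment it receives.

import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.BatchTableEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class UnifiedEntryPoint {
    public static void main(String[] args) {
        // Batch: an ExecutionEnvironment yields a BatchTableEnvironment.
        ExecutionEnvironment batchEnv = ExecutionEnvironment.getExecutionEnvironment();
        BatchTableEnvironment batchTableEnv = TableEnvironment.getTableEnvironment(batchEnv);

        // Streaming: a StreamExecutionEnvironment yields a StreamTableEnvironment.
        StreamExecutionEnvironment streamEnv = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment streamTableEnv = TableEnvironment.getTableEnvironment(streamEnv);

        // Both overloads are declared on the same TableEnvironment class, so
        // compiling against it requires the streaming classes on the classpath.
    }
}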

Because of these common classes, batch queries also require the flink-streaming dependencies.
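One way to satisfy this at build time (a sketch, assuming the same ${flink.version} property as in the pom above) is to add the flink-streaming-java artifact, with provided scope so it is not packaged into the application JAR:

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-java_2.11</artifactId>
            <version>${flink.version}</version>
            <scope>provided</scope>
        </dependency>

The Scala suffix (_2.11) should match the one used by the other Flink artifacts in the pom, such as flink-table_2.11.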