How to specify job artifacts for a Flink standalone job cluster in Docker Swarm?

Time: 2020-07-16 10:10:40

Tags: docker apache-flink docker-swarm

There is a configuration example for a job cluster with Docker Swarm:

docker service create \
  --name flink-jobmanager \
  --env FLINK_PROPERTIES="${FLINK_PROPERTIES}" \
  --mount type=bind,source=/host/path/to/job/artifacts,target=/opt/flink/usrlib \
  -p 8081:8081 \
  --network flink-job \
  flink:1.11.0-scala_2.11 \
    standalone-job \
    --job-classname com.job.ClassName \
    [--job-id <job id>] \
    [--fromSavepoint /path/to/savepoint [--allowNonRestoredState]] \
    [job arguments]

This means you mount the Flink artifact jar files into the container's /opt/flink/usrlib directory and run a dedicated job whose entry point is named by --job-classname.

My question is: if I have many artifacts with the same job name (main class), how does Flink decide which artifact to execute? Is there any way to specify the job artifact in the standalone-job command?
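For context: as far as I understand, every jar under /opt/flink/usrlib ends up on the job classpath, so if two jars contain the same main class, which one wins depends on classpath ordering that you don't control. One way to guarantee a single candidate is to bind-mount only the jar you want instead of the whole directory. A sketch based on the example above (the file name artifact1.jar is illustrative, not from the Flink docs):

```shell
# Mount one specific jar so /opt/flink/usrlib contains exactly one
# candidate for --job-classname (artifact1.jar is a hypothetical name).
docker service create \
  --name flink-jobmanager \
  --env FLINK_PROPERTIES="${FLINK_PROPERTIES}" \
  --mount type=bind,source=/host/path/to/job/artifacts/artifact1.jar,target=/opt/flink/usrlib/artifact1.jar \
  -p 8081:8081 \
  --network flink-job \
  flink:1.11.0-scala_2.11 \
    standalone-job \
    --job-classname com.job.ClassName
```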

In addition, I mount the container's /opt/flink/usrlib from an NFS volume, configured as follows:

  flink_usrlib:
    driver_opts:
      type: "nfs"
      o: "addr=10.225.32.64,nolock,soft,rw"
      device: ":/opt/nfs/flink/usrlib"

All Flink artifact jar files live on the NFS server under /opt/nfs/flink/usrlib. I figured I could define one volume per artifact, so that only a single artifact is mounted into each Flink container, like this:

flink-jobmanager-1:
    image: flink:1.10.1-scala_2.12
    depends_on:
      - zookeeper
    ports:
      - "18081:8081"
    volumes:
      - flink_usrlib_artifact1:/opt/flink/usrlib
      - flink_share:/opt/flink/share
      - /etc/localtime:/etc/localtime:ro

flink-jobmanager-2:
    image: flink:1.10.1-scala_2.12
    depends_on:
      - zookeeper
    ports:
      - "18082:8081"
    volumes:
      - flink_usrlib_artifact2:/opt/flink/usrlib
      - flink_share:/opt/flink/share
      - /etc/localtime:/etc/localtime:ro

volumes: 
    flink_usrlib_artifact1:
      driver_opts:
        type: "nfs"
        o: "addr=10.225.32.64,nolock,soft,rw"
        device: ":/opt/nfs/flink/usrlib/artifact1_path"

    flink_usrlib_artifact2:
      driver_opts:
        type: "nfs"
        o: "addr=10.225.32.64,nolock,soft,rw"
        device: ":/opt/nfs/flink/usrlib/artifact2_path"
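If the repetition above is the main concern, YAML anchors can factor out the shared NFS driver options so that only the `device` subpath differs per volume. A sketch (the anchor name `nfs-opts` is my own; this is equivalent config, not a new capability):

```yaml
# Shared NFS driver options, factored out with a YAML anchor.
x-nfs-opts: &nfs-opts
  type: "nfs"
  o: "addr=10.225.32.64,nolock,soft,rw"

volumes:
  flink_usrlib_artifact1:
    driver_opts:
      <<: *nfs-opts
      device: ":/opt/nfs/flink/usrlib/artifact1_path"

  flink_usrlib_artifact2:
    driver_opts:
      <<: *nfs-opts
      device: ":/opt/nfs/flink/usrlib/artifact2_path"
```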

But this configuration is quite redundant. Can I bind a subpath of the NFS volume instead, like this:

flink-jobmanager-1:
    image: flink:1.10.1-scala_2.12
    depends_on:
      - zookeeper
    ports:
      - "18081:8081"
    volumes:
      - flink_usrlib/artifact1.jar:/opt/flink/usrlib/artifact1.jar
      - flink_share:/opt/flink/share
      - /etc/localtime:/etc/localtime:ro

flink-jobmanager-2:
    image: flink:1.10.1-scala_2.12
    depends_on:
      - zookeeper
    ports:
      - "18082:8081"
    volumes:
      - flink_usrlib/artifact2.jar:/opt/flink/usrlib/artifact2.jar
      - flink_share:/opt/flink/share
      - /etc/localtime:/etc/localtime:ro

volumes: 
    flink_usrlib:
      driver_opts:
        type: "nfs"
        o: "addr=10.225.32.64,nolock,soft,rw"
        device: ":/opt/nfs/flink/usrlib"

If that works, my problem is also solved.
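For what it's worth, the Compose v3 file format used by Swarm does not support per-file subpaths on named volumes, so I doubt the `flink_usrlib/artifact1.jar` notation above is valid volume syntax. A common workaround, assuming each Swarm node can mount the NFS export itself (e.g. at /mnt/nfs/flink/usrlib, a path I chose for illustration), is a plain bind mount of the individual jar:

```yaml
# Assumes each Swarm node mounts the NFS export first, e.g. via /etc/fstab:
#   10.225.32.64:/opt/nfs/flink/usrlib  /mnt/nfs/flink/usrlib  nfs  nolock,soft,rw  0 0
flink-jobmanager-1:
    image: flink:1.10.1-scala_2.12
    depends_on:
      - zookeeper
    ports:
      - "18081:8081"
    volumes:
      # Plain bind mount of a single jar from the host-mounted NFS path.
      - /mnt/nfs/flink/usrlib/artifact1.jar:/opt/flink/usrlib/artifact1.jar:ro
      - flink_share:/opt/flink/share
      - /etc/localtime:/etc/localtime:ro
```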

Overall, I have one question:

  • How to configure job artifacts for a Flink standalone job cluster in Docker Swarm?

After some analysis of my own, it breaks down into two questions:

  • How to specify the job artifact in a Flink standalone job cluster?
  • Can a Docker NFS volume bind different subpaths into a container?

Any advice or solution would be greatly appreciated.

0 Answers:

No answers yet.