docker-compose v3 + Apache Spark: connection refused on port 7077

Asked: 2017-03-02 19:36:34

Tags: apache-spark docker docker-swarm

I'm not sure if this is 100% a programming question or more of a sysadmin one.

I'm trying to set up a version 3 docker-compose file for docker swarm (Docker 1.13) to test Spark in my local workflow.

Sadly, port 7077 is only bound to localhost on my swarm cluster, so it is not reachable from the outside world, which is where my Spark application is trying to connect to it.

Does anyone have an idea how to make docker-compose in swarm mode bind to all interfaces?

I publish my ports, and this works for 8080 but not for 7077.

nmap output:

Starting Nmap 7.01 ( https://nmap.org ) at 2017-03-02 11:27 PST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000096s latency).
Other addresses for localhost (not scanned): ::1
Not shown: 994 closed ports
PORT     STATE SERVICE
22/tcp   open  ssh
80/tcp   open  http
443/tcp  open  https
8080/tcp open  http-proxy
8081/tcp open  blackice-icecap
8888/tcp open  sun-answerbook

Port descriptions:

8081 is my spark worker
8080 is my spark master frontend
8888 is the spark hue frontend

nmap does not list 7077.

Using netstat:

tcp        0      0 0.0.0.0:22              0.0.0.0:*                   LISTEN      1641/sshd       
tcp6       0      0 :::4040                 :::*                    LISTEN      1634/dockerd    
tcp6       0      0 :::2377                 :::*                    LISTEN      1634/dockerd    
tcp6       0      0 :::7946                 :::*                    LISTEN      1634/dockerd    
tcp6       0      0 :::80                   :::*                    LISTEN      1634/dockerd    
tcp6       0      0 :::8080                 :::*                    LISTEN      1634/dockerd    
tcp6       0      0 :::8081                 :::*                    LISTEN      1634/dockerd    
tcp6       0      0 :::6066                 :::*                    LISTEN      1634/dockerd    
tcp6       0      0 :::22                   :::*                    LISTEN      1641/sshd       
tcp6       0      0 :::8888                 :::*                    LISTEN      1634/dockerd    
tcp6       0      0 :::443                  :::*                    LISTEN      1634/dockerd    
tcp6       0      0 :::7077                 :::*                    LISTEN      1634/dockerd  

I can connect to 7077 via telnet on localhost without any problem, but from outside localhost I get a connection refused error.
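That symptom (reachable from localhost, refused from everywhere else) is the classic signature of a service bound to the loopback interface rather than the wildcard address. A minimal Python sketch of the difference, not Spark-specific:

```python
import socket

def bound_address(host):
    """Bind a TCP listener to `host` and report the address it listens on."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host, 0))   # port 0: let the OS pick any free port
    s.listen(1)
    addr = s.getsockname()[0]
    s.close()
    return addr

# Loopback-only bind: telnet from localhost works, remote hosts get refused.
print(bound_address("127.0.0.1"))  # 127.0.0.1
# Wildcard bind: the port is reachable on every interface.
print(bound_address("0.0.0.0"))    # 0.0.0.0
```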

At this point (please bear with me, I'm not a sysadmin, I'm a software guy), I'm starting to suspect this has something to do with the docker mesh network.
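For reference, the routing mesh can be bypassed per port: compose file format 3.2 and later supports a long port syntax with `mode: host`, which publishes the port directly on the node running the task instead of through the ingress mesh. A sketch, untried in my setup:

```yaml
ports:
  - target: 7077      # container port
    published: 7077   # host port
    protocol: tcp
    mode: host        # bind on the node itself, bypassing the ingress mesh
```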

The docker-compose section for my master configuration:

# the spark master, which has to run on the frontend of the cluster
 master:
  image: eros.fiehnlab.ucdavis.edu/spark
  command: bin/spark-class org.apache.spark.deploy.master.Master -h master
  hostname: master
  environment:
    MASTER: spark://master:7077
    SPARK_CONF_DIR: /conf
    SPARK_PUBLIC_DNS: blonde.fiehnlab.ucdavis.edu
  ports:
    - 4040:4040
    - 6066:6066
    - 8080:8080
    - 7077:7077
  volumes:
    - /tmp:/tmp/data
  networks:
    - spark
    - frontends
  deploy:
    placement:
      #only run on manager node
      constraints:
        - node.role == manager

Both the spark and frontends networks are overlay networks.
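The corresponding top-level networks definition would look roughly like this (names as above, `overlay` driver per the description):

```yaml
networks:
  spark:
    driver: overlay
  frontends:
    driver: overlay
```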

1 answer:

Answer (score: 1):

The problem was a configuration error in the docker-compose file. The `-h master` in the original configuration always binds the master to the localhost interface, even when a SPARK_LOCAL_IP value is specified.

The working configuration drops that flag and sets `SPARK_LOCAL_IP: 0.0.0.0`:

 master:
  image: eros.fiehnlab.ucdavis.edu/spark:latest
  command: bin/spark-class org.apache.spark.deploy.master.Master 
  hostname: master
  environment:
    SPARK_CONF_DIR: /conf
    SPARK_PUBLIC_DNS: blonde.fiehnlab.ucdavis.edu
    SPARK_LOCAL_IP: 0.0.0.0
  ports:
    - 4040:4040
    - 6066:6066
    - 8080:8080
    - 7077:7077
  volumes:
    - /tmp:/tmp/data
  deploy:
    placement:
      #only run on manager node
      constraints:
        - node.role == manager