Hadoop ResourceManager fails to start in a Docker container deployed with docker stack deploy

Asked: 2019-02-01 09:00:02

Tags: hadoop docker-compose docker-swarm docker-stack

The Hadoop ResourceManager cannot connect to the namenode. I deployed Hadoop to Docker containers with docker stack deploy.

Looking at the logs, I see `java.lang.IllegalArgumentException: java.net.UnknownHostException: namenode`.
Logs:

2019-02-01T05:04:52.730306146Z java.lang.IllegalArgumentException: java.net.UnknownHostException: namenode
2019-02-01T05:04:52.730309023Z  at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:377)
2019-02-01T05:04:52.730311815Z  at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:320)
2019-02-01T05:04:52.730314328Z  at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
2019-02-01T05:04:52.730323720Z  at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:687)
2019-02-01T05:04:52.730326599Z  at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:628)
2019-02-01T05:04:52.730329153Z  at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
2019-02-01T05:04:52.730331693Z  at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
2019-02-01T05:04:52.730334150Z  at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
2019-02-01T05:04:52.730336614Z  at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2701)
2019-02-01T05:04:52.730339150Z  at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2683)
2019-02-01T05:04:52.730341689Z  at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:372)
2019-02-01T05:04:52.730344155Z  at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:171)
2019-02-01T05:04:52.730349666Z  at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:356)
2019-02-01T05:04:52.730352223Z  at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
2019-02-01T05:04:52.730354827Z  at org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.startInternal(FileSystemRMStateStore.java:141)
2019-02-01T05:04:52.730357555Z  at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.serviceStart(RMStateStore.java:562)
2019-02-01T05:04:52.730359988Z  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
2019-02-01T05:04:52.730362383Z  at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:564)
2019-02-01T05:04:52.730365019Z  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
2019-02-01T05:04:52.730367490Z  at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:974)
2019-02-01T05:04:52.730369745Z  at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1015)
2019-02-01T05:04:52.730372604Z  at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1011)
2019-02-01T05:04:52.730375154Z  at java.security.AccessController.doPrivileged(Native Method)
2019-02-01T05:04:52.730377563Z  at javax.security.auth.Subject.doAs(Subject.java:422)
2019-02-01T05:04:52.730379954Z  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
2019-02-01T05:04:52.730382419Z  at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToActive(ResourceManager.java:1011)
2019-02-01T05:04:52.730384894Z  at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1051)
2019-02-01T05:04:52.730387416Z  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
2019-02-01T05:04:52.730389852Z  at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1188)

When I run it with docker-compose up, everything works fine. I need to run it with docker stack because I want to manage it through the swarm manager.
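One difference I suspect matters here (though I am not certain it is the cause): `docker stack deploy` ignores `depends_on`, so the resourcemanager may start before the namenode's DNS entry exists on the overlay network. A minimal entrypoint-wrapper sketch that waits until the hostname resolves before starting the real process could look like this (`NAMENODE_HOST` is a hypothetical variable I made up; it defaults to the service name):

```shell
#!/bin/sh
# Sketch only: docker stack deploy does not honor depends_on, so block
# until the namenode hostname is resolvable before exec-ing the real
# command. NAMENODE_HOST is a hypothetical override, default "namenode".
until getent hosts "${NAMENODE_HOST:-namenode}" >/dev/null 2>&1; do
  echo "waiting for namenode to become resolvable..."
  sleep 2
done
exec "$@"
```

This would be used as the image's entrypoint, wrapping the original ResourceManager start command.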

I have tried pinging namenode from inside the resourcemanager container, and strangely enough, namenode answers:

root@resourcemanager:/# ping namenode
PING namenode (10.0.0.22) 56(84) bytes of data.
64 bytes from 10.0.0.22 (10.0.0.22): icmp_seq=1 ttl=64 time=0.087 ms
64 bytes from 10.0.0.22 (10.0.0.22): icmp_seq=2 ttl=64 time=0.095 ms
64 bytes from 10.0.0.22 (10.0.0.22): icmp_seq=3 ttl=64 time=0.042 ms
64 bytes from 10.0.0.22 (10.0.0.22): icmp_seq=4 ttl=64 time=0.092 ms
64 bytes from 10.0.0.22 (10.0.0.22): icmp_seq=5 ttl=64 time=0.096 ms
64 bytes from 10.0.0.22 (10.0.0.22): icmp_seq=6 ttl=64 time=0.107 ms
64 bytes from 10.0.0.22 (10.0.0.22): icmp_seq=7 ttl=64 time=0.104 ms
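What the ping resolves is the service's virtual IP (VIP), while it is Hadoop's Java-side resolver that fails, so I wonder whether VIP-based service discovery is part of the problem. One variant I have seen suggested (untested here, purely a sketch) is to switch the service to per-task DNS with `endpoint_mode`, which is valid in compose file format 3.3+:

```yaml
# Hypothetical tweak: return task IPs directly instead of a VIP.
services:
  namenode:
    deploy:
      endpoint_mode: dnsrr
```

Note that dnsrr mode is not compatible with ingress-mode published ports, so the `ports:` section would have to switch to host mode.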

docker-compose.yml

version: "3.3"

services:
  namenode:
    image: hadoop-namenode:2.7.7
    hostname: namenode
    volumes:
      - ./data/namenode:/hadoop/dfs/name
    environment:
      - CLUSTER_NAME=test
    env_file:
      - ./hadoop.env
    ports:
      - 50070:50070
      - 8020:8020
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
    networks:
      - hadoopnet
  resourcemanager:
    image: hadoop-resourcemanager:2.7.7
    hostname: resourcemanager
    depends_on:
      - namenode
    env_file:
      - ./hadoop.env
    ports:
      - 8088:8088
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
    networks:
      - hadoopnet
networks:
  hadoopnet:
    external:
      name: hadoopnet

0 Answers