Ceph: No cluster conf found in /etc/ceph with fsid

Date: 2016-01-29 13:53:35

Tags: deployment debian distributed-system ceph

I followed the official documentation for a quick Ceph deployment, and in the section on activating the OSDs I always hit the same error:

ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1

The command fails and always prints the same log:

[2016-01-29 14:19:54,024][ceph_deploy.conf][DEBUG ] found configuration file at: /home/admin/.cephdeploy.conf
[2016-01-29 14:19:54,032][ceph_deploy.cli][INFO  ] Invoked (1.5.30): /usr/bin/ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1
[2016-01-29 14:19:54,033][ceph_deploy.cli][INFO  ] ceph-deploy options:
[2016-01-29 14:19:54,033][ceph_deploy.cli][INFO  ]  username                      : None
[2016-01-29 14:19:54,034][ceph_deploy.cli][INFO  ]  verbose                       : False
[2016-01-29 14:19:54,035][ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[2016-01-29 14:19:54,036][ceph_deploy.cli][INFO  ]  subcommand                    : activate
[2016-01-29 14:19:54,037][ceph_deploy.cli][INFO  ]  quiet                         : False
[2016-01-29 14:19:54,038][ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f866bc90368>
[2016-01-29 14:19:54,040][ceph_deploy.cli][INFO  ]  cluster                       : ceph
[2016-01-29 14:19:54,041][ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f866bee75f0>
[2016-01-29 14:19:54,042][ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[2016-01-29 14:19:54,043][ceph_deploy.cli][INFO  ]  default_release               : False
[2016-01-29 14:19:54,044][ceph_deploy.cli][INFO  ]  disk                          : [('node2', '/var/local/osd0', None), ('node3', '/var/local/osd1', None)]
[2016-01-29 14:19:54,058][ceph_deploy.osd][DEBUG ] Activating cluster ceph disks node2:/var/local/osd0: node3:/var/local/osd1:
[2016-01-29 14:19:56,498][node2][DEBUG ] connection detected need for sudo
[2016-01-29 14:19:58,497][node2][DEBUG ] connected to host: node2 
[2016-01-29 14:19:58,516][node2][DEBUG ] detect platform information from remote host
[2016-01-29 14:19:58,601][node2][DEBUG ] detect machine type
[2016-01-29 14:19:58,609][node2][DEBUG ] find the location of an executable
[2016-01-29 14:19:58,613][ceph_deploy.osd][INFO  ] Distro info: debian 8.3 jessie
[2016-01-29 14:19:58,615][ceph_deploy.osd][DEBUG ] activating host node2 disk /var/local/osd0
[2016-01-29 14:19:58,617][ceph_deploy.osd][DEBUG ] will use init type: systemd
[2016-01-29 14:19:58,622][node2][INFO  ] Running command: sudo ceph-disk -v activate --mark-init systemd --mount /var/local/osd0
[2016-01-29 14:19:58,816][node2][WARNING] DEBUG:ceph-disk:Cluster uuid is eacfd426-58a3-44e8-a6f0-636a6b23e89e
[2016-01-29 14:19:58,818][node2][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[2016-01-29 14:19:59,401][node2][WARNING] Traceback (most recent call last):
[2016-01-29 14:19:59,403][node2][WARNING]   File "/usr/sbin/ceph-disk", line 3576, in <module>
[2016-01-29 14:19:59,405][node2][WARNING]     main(sys.argv[1:])
[2016-01-29 14:19:59,406][node2][WARNING]   File "/usr/sbin/ceph-disk", line 3530, in main
[2016-01-29 14:19:59,407][node2][WARNING]     args.func(args)
[2016-01-29 14:19:59,409][node2][WARNING]   File "/usr/sbin/ceph-disk", line 2432, in main_activate
[2016-01-29 14:19:59,410][node2][WARNING]     init=args.mark_init,
[2016-01-29 14:19:59,412][node2][WARNING]   File "/usr/sbin/ceph-disk", line 2258, in activate_dir
[2016-01-29 14:19:59,413][node2][WARNING]     (osd_id, cluster) = activate(path, activate_key_template, init)
[2016-01-29 14:19:59,415][node2][WARNING]   File "/usr/sbin/ceph-disk", line 2331, in activate
[2016-01-29 14:19:59,416][node2][WARNING]     raise Error('No cluster conf found in ' + SYSCONFDIR + ' with fsid %s' % ceph_fsid)
[2016-01-29 14:19:59,418][node2][WARNING] __main__.Error: Error: No cluster conf found in /etc/ceph with fsid eacfd426-58a3-44e8-a6f0-636a6b23e89e
[2016-01-29 14:19:59,443][node2][ERROR ] RuntimeError: command returned non-zero exit status: 1
[2016-01-29 14:19:59,445][ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk -v activate --mark-init systemd --mount /var/local/osd0

I am running Debian 8.3. I completed every step up to OSD activation. I have a 10 GB ext4 partition mounted at /var/local/osd0 on node2 and at /var/local/osd1 on node3. After the OSD prepare command some files appeared in those directories, but the OSD activate command still fails.
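The traceback in the log shows ceph-disk comparing the cluster uuid it read from the OSD data directory against the fsid values in /etc/ceph, and failing to find a match. A minimal diagnostic sketch, assuming the paths from the question (the helper name `check_fsid` is made up for illustration):

```shell
#!/bin/sh
# Diagnostic sketch: ceph-disk raises "No cluster conf found in /etc/ceph
# with fsid ..." when the fsid stored in the OSD data directory differs
# from the fsid in the node's cluster conf. Compare the two values.
check_fsid() {
    osd_dir=$1    # e.g. /var/local/osd0 (path from the question)
    conf=$2       # e.g. /etc/ceph/ceph.conf
    osd_fsid=$(cat "$osd_dir/ceph_fsid")
    conf_fsid=$(awk -F' *= *' '/^fsid/ {print $2}' "$conf")
    if [ "$osd_fsid" = "$conf_fsid" ]; then
        echo "fsid match: $osd_fsid"
    else
        echo "fsid mismatch: osd=$osd_fsid conf=$conf_fsid"
    fi
}
# On the failing node:
#   check_fsid /var/local/osd0 /etc/ceph/ceph.conf
```

If the two values differ, the OSD was prepared against a different cluster identity than the one the node's conf now carries.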

Can anyone help me?

1 Answer:

Answer (score: 1)

It happened because I had the same disk identifier on all nodes. After changing the identifier with fdisk, my cluster started working.
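A sketch of that fix. The answer only says the identifier was changed with fdisk; the exact keystrokes, the sgdisk alternative, and the device name `/dev/sdX` below are assumptions, not taken from the answer:

```shell
#!/bin/sh
# Generate a fresh, node-unique identifier for the disk
# (the /proc fallback is Linux-specific).
new_id=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)
echo "new disk identifier: $new_id"

# Interactive route with fdisk (keystrokes are an assumption):
#   fdisk /dev/sdX  ->  x (expert menu), i (change disk identifier),
#                       enter the new id, r (return), w (write)
# Non-interactive alternative for GPT disks (assumed tooling):
#   sgdisk --disk-guid="$new_id" /dev/sdX
```

Repeating this on each node ensures no two disks share an identifier, which is what tripped up the cluster here.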