ceph create osd fails: [ceph_deploy][ERROR] GenericError: Failed to create 1 OSDs

Time: 2018-08-10 07:17:58

Tags: ceph

ceph-admin node:


ceph-node1:

root@ryan-VirtualBox:~# ceph-deploy osd create --data /dev/sdb1 node1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create --data /dev/sdb1 node1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f3887e84950>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : node1
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f38882eeaa0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sdb1
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sdb1
[node1][DEBUG ] connected to host: node1 
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.osd][DEBUG ] Deploying osd to node1
[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node1][DEBUG ] find the location of an executable
[node1][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb1
[node1][WARNIN] -->  RuntimeError: command returned non-zero exit status: 22
[node1][DEBUG ] Running command: /usr/bin/ceph-authtool --gen-print-key
[node1][DEBUG ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 7de2ad9f-7f6f-4134-8fe6-1c72e98f5758
[node1][DEBUG ] Running command: /sbin/vgcreate --force --yes ceph-8c190b87-b0f2-4cc4-986d-e30a2157b70e /dev/sdb1
[node1][DEBUG ]  stderr: 
[node1][DEBUG ]  stderr: Can't open /dev/sdb1 exclusively.  Mounted filesystem?
[node1][DEBUG ]  stderr: 
[node1][DEBUG ] --> Was unable to complete a new OSD, will rollback changes
[node1][DEBUG ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.11 --yes-i-really-mean-it
[node1][DEBUG ]  stderr: no valid command found; 10 closest matches:
[node1][DEBUG ]  stderr: 
[node1][DEBUG ]  stderr: osd tier add-cache <poolname> <poolname> <int[0-]>
[node1][DEBUG ]  stderr: 
[node1][DEBUG ]  stderr: osd tier remove-overlay <poolname>
[node1][DEBUG ]  stderr: 
[node1][DEBUG ]  stderr: osd out <ids> [<ids>...]
[node1][DEBUG ]  stderr: 
[node1][DEBUG ]  stderr: osd in <ids> [<ids>...]
[node1][DEBUG ]  stderr: 
[node1][DEBUG ]  stderr: osd down <ids> [<ids>...]
[node1][DEBUG ]  stderr: 
[node1][DEBUG ]  stderr: osd unset full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim
[node1][DEBUG ]  stderr: 
[node1][DEBUG ]  stderr: osd require-osd-release luminous|mimic {--yes-i-really-mean-it}
[node1][DEBUG ]  stderr: 
[node1][DEBUG ]  stderr: osd erasure-code-profile ls
[node1][DEBUG ]  stderr: 
[node1][DEBUG ]  stderr: osd set full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim|sortbitwise|recovery_deletes|require_jewel_osds|require_kraken_osds {--yes-i-really-mean-it}
[node1][DEBUG ]  stderr: 
[node1][DEBUG ]  stderr: osd erasure-code-profile get <name>
[node1][DEBUG ]  stderr: 
[node1][DEBUG ]  stderr: Error EINVAL: invalid command
[node1][DEBUG ]  stderr: 
[node1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb1
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs

1 Answer:

Answer 0 (score: 0)

I ran into a lot of problems with this.

Solution:

Step 1:

parted -s /dev/sdb mklabel gpt mkpart primary xfs 0% 100%
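
Here -s makes parted non-interactive, mklabel gpt writes a fresh GPT partition table over the old layout, and mkpart primary xfs 0% 100% creates a single partition spanning the whole disk. As an extra verification step (not part of the original answer), the new layout can be checked with:

parted -s /dev/sdb print
lsblk /dev/sdb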

Step 2:

reboot 
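
The reboot forces the kernel to release any stale holders of the old partition and re-read the new partition table. If rebooting is not convenient, partprobe from the parted package should trigger the same re-read; this is a substitution of mine, not what the answer tested:

partprobe /dev/sdb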

Step 3:

mkfs.xfs /dev/sdb -f
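
The -f flag is what lets mkfs.xfs write over the signature left by the previous layout. If the goal is only to clear old signatures so that LVM can later claim the device, wipefs is a lighter alternative (again my substitution, not what the answerer ran):

wipefs -a /dev/sdb1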

I tested it!
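
With the device no longer held open, re-running the original command from the admin node should get exclusive access this time. A sketch reusing the host and device names from the log above; the disk zap line is an optional extra wipe through ceph-deploy itself:

# optional: let ceph-deploy wipe leftover signatures on the device
ceph-deploy disk zap node1 /dev/sdb1
# retry the create that failed above
ceph-deploy osd create --data /dev/sdb1 node1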