I'm trying to install Ceph on two EC2 instances, following this guide, but I can't create the OSDs. My cluster has only two servers, and partition creation fails when I use this command:
ceph-deploy osd create host:xvdb:/dev/xvdb1 host:xvdf:/dev/xvdf1
[WARNIN] command_check_call: Running command: /sbin/mkfs -t xfs -K -f -- /dev/xvdf1
[WARNIN] can't get size of data subvolume
[WARNIN] Usage: mkfs.xfs
[WARNIN] /* blocksize */ [-b log=n|size=num]
[WARNIN] /* metadata */ [-m crc=0|1,finobt=0|1,uuid=xxx]
[WARNIN] /* data subvol */ [-d agcount=n,agsize=n,file,name=xxx,size=num,
[WARNIN] (sunit=value,swidth=value|su=num,sw=num|noalign),
[WARNIN] sectlog=n|sectsize=num
[WARNIN] /* force overwrite */ [-f]
[WARNIN] /* inode size */ [-i log=n|perblock=n|size=num,maxpct=n,attr=0|1|2,
[WARNIN] projid32bit=0|1]
[WARNIN] /* no discard */ [-K]
[WARNIN] /* log subvol */ [-l agnum=n,internal,size=num,logdev=xxx,version=n
[WARNIN] sunit=value|su=num,sectlog=n|sectsize=num,
[WARNIN] lazy-count=0|1]
[WARNIN] /* label */ [-L label (maximum 12 characters)]
[WARNIN] /* naming */ [-n log=n|size=num,version=2|ci,ftype=0|1]
[WARNIN] /* no-op info only */ [-N]
[WARNIN] /* prototype file */ [-p fname]
[WARNIN] /* quiet */ [-q]
[WARNIN] /* realtime subvol */ [-r extsize=num,size=num,rtdev=xxx]
[WARNIN] /* sectorsize */ [-s log=n|size=num]
[WARNIN] /* version */ [-V]
[WARNIN] devicename
[WARNIN] <devicename> is required unless -d name=xxx is given.
[WARNIN] <num> is xxx (bytes), xxxs (sectors), xxxb (fs blocks), xxxk (xxx KiB),
[WARNIN] xxxm (xxx MiB), xxxg (xxx GiB), xxxt (xxx TiB) or xxxp (xxx PiB).
[WARNIN] <value> is xxx (512 byte blocks).
[WARNIN] '/sbin/mkfs -t xfs -K -f -- /dev/xvdf1' failed with status code 1
[ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/xvdf /dev/xvdf1
[ceph_deploy][ERROR ] GenericError: Failed to create 2 OSDs
The same error occurs on both disks on which I'm trying to create OSDs. This is the ceph.conf file I'm using:
fsid = b3901613-0b17-47d2-baaa-26859c457737
mon_initial_members = host1,host2
mon_host = host1,host2
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd mkfs options xfs = -K
public network = ip.ip.ip.0/24, ip.ip.ip.0/24
cluster network = ip.ip.0.0/24
osd pool default size = 2 # Write an object 2 times
osd pool default min size = 1 # Allow writing 1 copy in a degraded state
osd pool default pg num = 256
osd pool default pgp num = 256
osd crush chooseleaf type = 3
Does anyone know how to solve this?
Answer 0 (score: 1)
>> ceph-deploy osd create host:xvdb:/dev/xvdb1 host:xvdf:/dev/xvdf1
You need to pass the DATA partition device name and the journal partition device name. So something like:
ceph-deploy osd create host:/dev/xvdb1:/dev/xvdb2 host:/dev/xvdf1:/dev/xvdf2
Also, since you created these partitions manually, you need to change the ownership of the devices to ceph:ceph for ceph-deploy to work.
Example: chown ceph:ceph /dev/xvdb*
Example: chown ceph:ceph /dev/xvdf*
Note: if you don't specify a journal disk, i.e. [/dev/xvdb2 or /dev/xvdf2], ceph-deploy will use a file instead of a disk partition to store the journal.
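As a rough sketch of the whole sequence for one disk (assuming /dev/xvdb is an unused disk, the OSD host is named `host`, and a 90/10 data/journal split is acceptable — adjust all of these to your setup; this must run as root and will destroy any data on the disk):

```shell
# Sketch only: carve out a data partition (xvdb1) and a journal partition (xvdb2).
parted -s /dev/xvdb mklabel gpt
parted -s /dev/xvdb mkpart primary 0% 90%     # /dev/xvdb1: data
parted -s /dev/xvdb mkpart primary 90% 100%   # /dev/xvdb2: journal

# Hand the device nodes to the ceph user so ceph-disk can open them.
chown ceph:ceph /dev/xvdb*

# Pass data-partition:journal-partition, not disk:partition.
ceph-deploy osd create host:/dev/xvdb1:/dev/xvdb2
```

Repeat the same steps for /dev/xvdf before the second host:part:part argument.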
- 迪帕克