For cost reasons we only have two OSD nodes, but I want to use 3 replicas so that scrubbing can repair any data errors. I'm having a hard time figuring out how best to define a CRUSH rule for this.
This is Ceph Mimic.
The cluster looks like this:
$ ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 20.95917 root default
-3 10.47958 host san10
0 ssd 1.74660 osd.0 up 1.00000 1.00000
1 ssd 1.74660 osd.1 up 1.00000 1.00000
2 ssd 1.74660 osd.2 up 1.00000 1.00000
3 ssd 1.74660 osd.3 up 1.00000 1.00000
4 ssd 1.74660 osd.4 up 1.00000 1.00000
5 ssd 1.74660 osd.5 up 1.00000 1.00000
-5 10.47958 host san11
6 ssd 1.74660 osd.6 up 1.00000 1.00000
7 ssd 1.74660 osd.7 up 1.00000 1.00000
8 ssd 1.74660 osd.8 up 1.00000 1.00000
9 ssd 1.74660 osd.9 up 1.00000 1.00000
10 ssd 1.74660 osd.10 up 1.00000 1.00000
11 ssd 1.74660 osd.11 up 1.00000 1.00000
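(For reference, the map.txt used below is assumed to be the decompiled CRUSH map of this cluster with the rule added; the standard way to dump it is:)
$ ceph osd getcrushmap -o map.bin
$ crushtool -d map.bin -o map.txt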
My rule is:
rule myrule {
        id 2
        type replicated
        min_size 3
        max_size 4
        step take default
        step choose firstn 2 type host        # pick 2 host buckets
        step chooseleaf firstn 2 type osd     # pick 2 OSDs under each chosen host
        step emit
}
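My reading of the rule is: choose firstn 2 type host selects both hosts, chooseleaf firstn 2 type osd picks 2 OSDs under each of them, and emit outputs those up to 4 OSDs grouped by host. With a size-3 pool only the first 3 candidates are used, i.e. 2 replicas on one host and 1 on the other, so both hosts should always be represented. The plan is to point a size-3 pool at this rule, roughly like this (pool name and PG count are just placeholders):
$ ceph osd pool create mypool 128 128 replicated myrule
$ ceph osd pool set mypool size 3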
Running crushtool to test the rule:
$ crushtool -c map.txt -o map.bin && crushtool -i map.bin --test --show-statistics --show-mappings --rule 2 --min-x 1 --max-x 10 --num-rep 3
rule 2 (myrule), x = 1..10, numrep = 3..3
CRUSH rule 2 x 1 [9,11,5]
CRUSH rule 2 x 2 [1,3,9]
CRUSH rule 2 x 3 [0,4,11]
CRUSH rule 2 x 4 [8,10,5]
CRUSH rule 2 x 5 [3,0,7]
CRUSH rule 2 x 6 [2,4,6]
CRUSH rule 2 x 7 [9,6,1]
CRUSH rule 2 x 8 [2,5,7]
CRUSH rule 2 x 9 [9,8,4]
CRUSH rule 2 x 10 [10,7,4]
rule 2 (myrule) num_rep 3 result size == 3: 10/10
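If my reading is right, the rule can take at most 2 OSDs from any single host, so any mapping of size 3 necessarily spans both hosts. A wider sweep that prints only failures should therefore be fairly conclusive; --show-bad-mappings reports any input whose result size differs from --num-rep, so empty output would mean every tested input mapped to 3 OSDs:
$ crushtool -i map.bin --test --rule 2 --num-rep 3 --min-x 0 --max-x 100000 --show-bad-mappings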
So it looks like OSDs are always chosen from both hosts, but I'm not sure that's guaranteed. Can someone confirm? Or suggest a better way to achieve this?