I'm still a bit confused about how the Ceph CRUSH map works and was hoping someone could shed some light. Here is my osd tree:
core@store101 ~ $ ceph osd tree
ID  WEIGHT  TYPE NAME                   UP/DOWN REWEIGHT PRIMARY-AFFINITY
 -1 6.00000 root default
 -2 3.00000     datacenter dc1
 -4 3.00000         rack rack_dc1
-10 1.00000             host store101
  4 1.00000                 osd.4            up  1.00000          1.00000
 -7 1.00000             host store102
  1 1.00000                 osd.1            up  1.00000          1.00000
 -9 1.00000             host store103
  3 1.00000                 osd.3            up  1.00000          1.00000
 -3 3.00000     datacenter dc2
 -5 3.00000         rack rack_dc2
 -6 1.00000             host store104
  0 1.00000                 osd.0            up  1.00000          1.00000
 -8 1.00000             host store105
  2 1.00000                 osd.2            up  1.00000          1.00000
-11 1.00000             host store106
  5 1.00000                 osd.5            up  1.00000          1.00000
I just want to make sure that, with a replication size of 2 or greater, the copies of an object never all end up in the same datacenter. My rule (taken from the internet) is:
rule replicated_ruleset_dc {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 2 type datacenter
        step choose firstn 2 type rack
        step chooseleaf firstn 0 type host
        step emit
}
However, if I dump the placement groups, I immediately see PGs whose two OSDs come from the same datacenter: osds 5 and 0.
core@store101 ~ $ ceph pg dump | grep 5,0
1.73 0 0 0 0 0 0 0 0 active+clean 2015-07-09 13:41:36.939197 0'0 96:113 [5,0] 5 [5,0] 5 0'0 2015-07-09 12:05:01.854945 0'0 2015-07-09 12:05:01.854945
1.70 0 0 0 0 0 0 0 0 active+clean 2015-07-09 13:41:36.947403 0'0 96:45 [5,0] 5 [5,0] 5 0'0 2015-07-09 12:05:01.854941 0'0 2015-07-09 12:05:01.854941
1.6f 0 0 0 0 0 0 0 0 active+clean 2015-07-09 13:41:36.947056 0'0 96:45 [5,0] 5 [5,0] 5 0'0 2015-07-09 12:05:01.854940 0'0 2015-07-09 12:05:01.854940
1.6c 0 0 0 0 0 0 0 0 active+clean 2015-07-09 13:41:36.938591 0'0 96:45 [5,0] 5 [5,0] 5 0'0 2015-07-09 12:05:01.854939 0'0 2015-07-09 12:05:01.854939
1.66 0 0 0 0 0 0 0 0 active+clean 2015-07-09 13:41:36.937803 0'0 96:107 [5,0] 5 [5,0] 5 0'0 2015-07-09 12:05:01.854936 0'0 2015-07-09 12:05:01.854936
1.67 0 0 0 0 0 0 0 0 active+clean 2015-07-09 13:41:36.929323 0'0 96:33 [5,0] 5 [5,0] 5 0'0 2015-07-09 12:05:01.854937 0'0 2015-07-09 12:05:01.854937
1.65 0 0 0 0 0 0 0 0 active+clean 2015-07-09 13:41:36.928200 0'0 96:33 [5,0] 5 [5,0] 5 0'0 2015-07-09 12:05:01.854936 0'0 2015-07-09 12:05:01.854936
1.63 0 0 0 0 0 0 0 0 active+clean 2015-07-09 13:41:36.927642 0'0 96:107 [5,0] 5 [5,0] 5 0'0 2015-07-09 12:05:01.854935 0'0 2015-07-09 12:05:01.854935
1.3f 0 0 0 0 0 0 0 0 active+clean 2015-07-09 13:41:36.924738 0'0 96:33 [5,0] 5 [5,0] 5 0'0 2015-07-09 12:05:01.854920 0'0 2015-07-09 12:05:01.854920
1.36 0 0 0 0 0 0 0 0 active+clean 2015-07-09 13:41:36.917833 0'0 96:45 [5,0] 5 [5,0] 5 0'0 2015-07-09 12:05:01.854916 0'0 2015-07-09 12:05:01.854916
1.33 0 0 0 0 0 0 0 0 active+clean 2015-07-09 13:41:36.911484 0'0 96:104 [5,0] 5 [5,0] 5 0'0 2015-07-09 12:05:01.854915 0'0 2015-07-09 12:05:01.854915
1.2b 0 0 0 0 0 0 0 0 active+clean 2015-07-09 13:41:36.878280 0'0 96:58 [5,0] 5 [5,0] 5 0'0 2015-07-09 12:05:01.854911 0'0 2015-07-09 12:05:01.854911
1.5 0 0 0 0 0 0 0 0 active+clean 2015-07-09 13:41:36.942620 0'0 96:98 [5,0] 5 [5,0] 5 0'0 2015-07-09 12:05:01.854892 0'0 2015-07-09 12:05:01.854892
How can I make sure there is always at least one replica in the other DC?
Answer (score: 0)
Yesterday I changed my Ceph CRUSH map to this:
ID  WEIGHT    TYPE NAME            UP/DOWN REWEIGHT PRIMARY-AFFINITY
 -1 181.99979 root default
-12  90.99989     rack rack1
 -2  15.46999         host ceph0
  1   3.64000             osd.1         up  1.00000          1.00000
  0   3.64000             osd.0         up  1.00000          1.00000
  8   2.73000             osd.8         up  1.00000          1.00000
  9   2.73000             osd.9         up  1.00000          1.00000
 19   2.73000             osd.19        up  1.00000          1.00000
...
-13  90.99989     rack rack2
 -3  15.46999         host ceph2
  2   3.64000             osd.2         up  1.00000          1.00000
  3   3.64000             osd.3         up  1.00000          1.00000
 10   2.73000             osd.10        up  1.00000          1.00000
 11   2.73000             osd.11        up  1.00000          1.00000
 18   2.73000             osd.18        up  1.00000          1.00000
...
rack rack1 {
        id -12          # do not change unnecessarily
        # weight 91.000
        alg straw
        hash 0  # rjenkins1
        item ceph0 weight 15.470
        ...
}
rack rack2 {
        id -13          # do not change unnecessarily
        # weight 91.000
        alg straw
        hash 0  # rjenkins1
        item ceph2 weight 15.470
        ...
}
root default {
        id -1           # do not change unnecessarily
        # weight 182.000
        alg straw
        hash 0  # rjenkins1
        item rack1 weight 91.000
        item rack2 weight 91.000
}
rule racky {
        ruleset 3
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type rack
        step emit
}
Please show your "root default" section.
Try this:
rule replicated_ruleset_dc {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type datacenter
        step emit
}