Ceph raw space usage

Date: 2015-02-22 14:36:48

Tags: ceph

I can't understand where my Ceph raw space has gone.

cluster 90dc9682-8f2c-4c8e-a589-13898965b974
     health HEALTH_WARN 72 pgs backfill; 26 pgs backfill_toofull; 51 pgs backfilling; 141 pgs stuck unclean; 5 requests are blocked > 32 sec; recovery 450170/8427917 objects degraded (5.341%); 5 near full osd(s)
     monmap e17: 3 mons at {enc18=192.168.100.40:6789/0,enc24=192.168.100.43:6789/0,enc26=192.168.100.44:6789/0}, election epoch 734, quorum 0,1,2 enc18,enc24,enc26
     osdmap e3326: 14 osds: 14 up, 14 in
      pgmap v5461448: 1152 pgs, 3 pools, 15252 GB data, 3831 kobjects
            31109 GB used, 7974 GB / 39084 GB avail
            450170/8427917 objects degraded (5.341%)
                  18 active+remapped+backfill_toofull
                1011 active+clean
                  64 active+remapped+wait_backfill
                   8 active+remapped+wait_backfill+backfill_toofull
                  51 active+remapped+backfilling
recovery io 58806 kB/s, 14 objects/s
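
For a per-pool view of how the raw capacity is consumed, "ceph df detail" gives both the cluster-wide raw usage and the logical usage per pool (a minimal sketch, output omitted here):

# cluster-wide raw totals plus per-pool logical usage and object counts
ceph df detail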

OSD tree (each host has 2 OSDs):

# id    weight  type name       up/down reweight
-1      36.45   root default
-2      5.44            host enc26
0       2.72                    osd.0   up      1
1       2.72                    osd.1   up      0.8227
-3      3.71            host enc24
2       0.99                    osd.2   up      1
3       2.72                    osd.3   up      1
-4      5.46            host enc22
4       2.73                    osd.4   up      0.8
5       2.73                    osd.5   up      1
-5      5.46            host enc18
6       2.73                    osd.6   up      1
7       2.73                    osd.7   up      1
-6      5.46            host enc20
9       2.73                    osd.9   up      0.8
8       2.73                    osd.8   up      1
-7      0               host enc28
-8      5.46            host archives
12      2.73                    osd.12  up      1
13      2.73                    osd.13  up      1
-9      5.46            host enc27
10      2.73                    osd.10  up      1
11      2.73                    osd.11  up      1
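
The near full warnings and backfill_toofull PGs in the status above are normally dealt with by shifting data off the fullest OSDs. A sketch of the usual commands (the OSD id and weight below are only examples):

# lower the weight of an over-full OSD so CRUSH places less data on it (0 < weight <= 1)
ceph osd reweight 1 0.85

# or let Ceph compute reweights from actual utilization
ceph osd reweight-by-utilization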

Real usage:

/dev/rbd0        14T  7.9T  5.5T  59% /mnt/ceph
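
Note that df on the mounted filesystem only shows what the filesystem currently references; blocks it has freed are not returned to the RBD image (and so not to the cluster) unless they are discarded. A sketch, assuming the kernel RBD driver and the filesystem in use support discard:

# one-off: pass the freed blocks down to the RBD device
fstrim /mnt/ceph

# or mount with continuous discard (extra overhead, needs krbd discard support)
mount -o discard /dev/rbd0 /mnt/ceph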

Pool size:

osd pool default size = 2
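
This ceph.conf setting is only the default for newly created pools; the replication factor actually in effect for an existing pool can be checked directly, for example:

# replication factor currently set on the rbd pool
ceph osd pool get rbd size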

Pools (ceph osd lspools):

0 data,1 metadata,2 rbd,

rados df

pool name       category                 KB      objects       clones     degraded      unfound           rd        rd KB           wr        wr KB
data            -                          0            0            0            0           0            0            0            0            0
metadata        -                          0            0            0            0           0            0            0            0            0
rbd             -                15993591918      3923880            0       444545           0        82936      1373339      2711424    849398218
  total used     32631712348      3923880
  total avail     8351008324
  total space    40982720672

The raw usage is 4 times the actual usage. As I understand it, it should be 2 times, shouldn't it?
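
A quick sanity check on the numbers from rados df above: the cluster-wide used space is about twice the data stored in the rbd pool, which matches size = 2; the 4x impression appears to come from comparing against the 7.9T that df reports rather than against what the pool actually stores.

# ratio of total used to rbd pool data (figures copied from the rados df output)
echo "scale=2; 32631712348 / 15993591918" | bc    # ~2.04, consistent with 2 replicas
echo "scale=2; 15993591918 / 1024^3" | bc         # ~14.9 TB of data stored in the rbd pool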

2 Answers:

Answer 0 (score: 0)

Yes, it should be 2x. But I am not so sure that your real raw usage is 7.9T. Why are you checking this value on the mapped disk?
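
One way to see how much data the image really occupies in the pool, rather than what df inside the filesystem reports, is to sum its allocated extents. A sketch (pool and image names here are placeholders; listing extents can take a while on a large image):

# provisioned size and basic metadata of the image
rbd info rbd/myimage

# sum the extents actually allocated to the image
rbd diff rbd/myimage | awk '{ sum += $2 } END { print sum/1024/1024/1024 " GB" }'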

Here are my pools:


pool name                 KB      objects       clones     degraded      unfound           rd        rd KB           wr        wr KB
admin-pack           7689982         1955            0            0            0       693841      3231750     40068930    353462603
public-cloud       105432663        26561            0            0            0     13001298    638035025    222540884   3740413431
rbdkvm_sata      32624026697      7968550        31783            0            0   4950258575 232374308589  12772302818 278106113879
  total used     98289353680      7997066
  total avail    34474223648
  total space   132763577328

As you can see, the total used space is roughly 3 times the used space of the rbdkvm_sata pool.

ceph -s also shows the same result:


pgmap v11303091: 5376 pgs, 3 pools, 31220 GB data, 7809 kobjects
            93736 GB used, 32876 GB / 123 TB avail
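
The roughly 3x ratio here presumably just reflects that pool's replication factor; assuming the pool name above, it can be confirmed with:

ceph osd pool get rbdkvm_sata size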

Answer 1 (score: -1)

I don't think you have only one RBD image. The output of "ceph osd lspools" indicates that you have 3 pools, and one of them is named "metadata" (maybe you are using CephFS). /dev/rbd0 appears because you mapped that image, but you could have other images as well. To list the images you can use "rbd list -p <pool>", and you can view the image information with "rbd info -p <pool> <image>".
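
Spelled out with the pool name from the lspools output in the question (the image name is a placeholder):

# images stored in the rbd pool
rbd list -p rbd

# details of one image
rbd info rbd/myimage

# images currently mapped on this host
rbd showmapped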