Ceph status is HEALTH_WARN after adding an RGW instance

Date: 2019-01-16 14:46:17

Tags: rest deployment ceph

I want to create a Ceph cluster and then connect to it through the S3 RESTful API, so I deployed a Ceph cluster (Mimic 13.2.4) on "Ubuntu 16.04.5 LTS" with 3 OSDs (one per 10 GB HDD).

Using these tutorials:

1)http://docs.ceph.com/docs/mimic/start/quick-start-preflight/#ceph-deploy-setup

2) http://docs.ceph.com/docs/mimic/start/quick-ceph-deploy/

At this point the Ceph status is OK:

root@ubuntu-srv:/home/slavik/my-cluster# ceph -s
  cluster:
    id:     d7459118-8c16-451d-9774-d09f7a926d0e
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ubuntu-srv
    mgr: ubuntu-srv(active)
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0  objects, 0 B
    usage:   3.0 GiB used, 27 GiB / 30 GiB avail
    pgs:

3) "To use the Ceph Object Gateway component of Ceph, you must deploy an instance of RGW. Execute the following to create a new instance of RGW:"

root@ubuntu-srv:/home/slavik/my-cluster# ceph-deploy rgw create ubuntu-srv
....
[ceph_deploy.rgw][INFO  ] The Ceph Object Gateway (RGW) is now running on host ubuntu-srv and default port 7480
root@ubuntu-srv:/home/slavik/my-cluster# ceph -s
  cluster:
    id:     d7459118-8c16-451d-9774-d09f7a926d0e
    health: HEALTH_WARN
            too few PGs per OSD (2 < min 30)

  services:
    mon: 1 daemons, quorum ubuntu-srv
    mgr: ubuntu-srv(active)
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   1 pools, 8 pgs
    objects: 0  objects, 0 B
    usage:   3.0 GiB used, 27 GiB / 30 GiB avail
    pgs:     37.500% pgs unknown
             62.500% pgs not active
             5 creating+peering
             3 unknown

Ceph's status has changed to HEALTH_WARN. Why, and how do I fix it?

1 Answer:

Answer 0 (score: 1):

Your problem is:

health: HEALTH_WARN
        too few PGs per OSD (2 < min 30)

Look at your current PG configuration by running:

ceph osd dump | grep pool

See what PG count is configured for each pool, then go to https://ceph.com/pgcalc/ to calculate what your pools should be configured with.
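As a rough worked example for this cluster (a sketch only; the target of 100 PGs per OSD and a replica size of 3 are pgcalc defaults, not values taken from the question):

# pgcalc rule of thumb: (target PGs per OSD * number of OSDs) / replica size,
# rounded to the nearest power of two.
echo $(( (100 * 3) / 3 ))   # = 100, nearest power of two -> 128 PGs across all pools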

The warning is telling you that you have too few PGs per OSD: right now there are 2 per OSD, while the minimum should be 30.
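Once you have a target value from pgcalc, you can raise the PG count of the affected pool, for example (a sketch; substitute your own pool name and calculated value, and note that pg_num can only be increased, never decreased):

# Raise the placement group count of a pool to the value pgcalc suggests
# (64 here is only an example target):
ceph osd pool set <pool-name> pg_num 64
ceph osd pool set <pool-name> pgp_num 64

Setting pgp_num to match pg_num is what actually lets data be placed across the new placement groups.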