Ceph Luminous, what am I missing?

Date: 2020-04-11 23:00:34

Tags: ceph

With the previous Jewel release I had no problems. I created a test cluster of 5 VMs, all running CentOS 7 and the Nautilus Ceph release: 1 VM is a monitor, 3 are OSDs, and 1 is the admin/mgr node. The cluster deployed fine and its health was OK, but after creating the MDS and the pools...

ceph -s
  cluster:
    id:     87c90336-38bc-4ec2-bcde-2629e1e7b12f
    health: HEALTH_WARN
            Reduced data availability: 42 pgs inactive, 43 pgs peering

  services:
    mon: 1 daemons, quorum ceph1-mon (age 8m)
    mgr: ceph1-admin(active, since 8m)
    mds: cephfs:1 {0=ceph1-osd=up:active} 1 up:standby
    osd: 3 osds: 3 up (since 7m), 3 in (since 20h)

  data:
    pools:   2 pools, 128 pgs
    objects: 18 objects, 2.6 KiB
    usage:   2.1 GiB used, 78 GiB / 80 GiB avail
    pgs:     32.812% pgs unknown
             67.188% pgs not active
             86 peering
             42 unknown

Checking the health status:

ceph health detail 
HEALTH_WARN Reduced data availability: 42 pgs inactive, 43 pgs peering
PG_AVAILABILITY Reduced data availability: 42 pgs inactive, 43 pgs peering
    pg 9.0 is stuck peering for 254.671721, current state peering, last acting [0,1,2]
    pg 9.1 is stuck peering for 254.671732, current state peering, last acting [0,2,1]
    pg 9.4 is stuck peering for 254.670850, current state peering, last acting [0,1,2]
    pg 9.5 is stuck inactive for 234.575775, current state unknown, last acting []
    pg 9.7 is stuck inactive for 234.575775, current state unknown, last acting []
    pg 9.8 is stuck inactive for 234.575775, current state unknown, last acting []

The output is quite long; many PGs are inactive or stuck peering. I used the following configuration:

#ceph.conf
[global]
fsid = 87c90336-38bc-4ec2-bcde-2629e1e7b12f
mon_initial_members = ceph1-mon
mon_host = 10.2.0.117
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
mon_allow_pool_delete = true
mon_max_pg_per_osd = 128
osd max pg per osd hard ratio = 10 # < default is 2, try to set at least 5. It will be

I created the OSDs with the following commands:

ceph-deploy --overwrite-conf osd create --data /dev/vdb ceph1-osd
ceph-deploy --overwrite-conf osd create --data /dev/vdb ceph2-osd
ceph-deploy --overwrite-conf osd create --data /dev/vdb ceph3-osd

I created the MDS daemons with the following commands:

ceph-deploy mds create ceph1-osd
ceph-deploy mds create ceph2-osd
ceph-deploy mds create ceph3-osd

For the pools and the filesystem I used the following commands:

ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64
ceph fs new cephfs cephfs_metadata cephfs_data

What is wrong?

1 answer:

Answer 0 (score: 0):

In most cases, peering/unknown PGs like these are related to connectivity problems. Can the monitor and the OSDs reach each other? Could a firewall rule or a routing error be causing the issue?
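For example, the basics can be verified roughly like this (a minimal sketch; it assumes firewalld is running on the CentOS 7 nodes and that the default Ceph ports are in use: 3300/6789 for the monitor, 6800-7300 for the OSDs):

# From each OSD node, check that the monitor is reachable (mon_host from ceph.conf)
ping -c 3 10.2.0.117

# See whether firewalld is filtering the Ceph ports
firewall-cmd --list-all

# If the ports are closed, open them (firewalld on CentOS 7 ships ceph/ceph-mon
# service definitions; otherwise add the port ranges manually)
firewall-cmd --permanent --add-service=ceph-mon   # on the monitor node
firewall-cmd --permanent --add-service=ceph       # on the OSD nodes
firewall-cmd --reload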

The OSD and monitor logs are also worth checking. Are there any errors in them (most likely there are)?
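A rough sketch of where to look, assuming the default systemd unit names and log locations (osd.0 and the host names are just examples taken from this cluster):

# On an OSD node
journalctl -u ceph-osd@0 --since "1 hour ago"
tail -n 100 /var/log/ceph/ceph-osd.0.log

# On the monitor node
journalctl -u ceph-mon@ceph1-mon --since "1 hour ago"
tail -n 100 /var/log/ceph/ceph-mon.ceph1-mon.log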

Checking all of these things should point you toward the cause.
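Querying one of the stuck PGs directly can also show what it is waiting on (pg 9.0 is taken from the health output above):

# Ask the acting OSDs why the PG is stuck; look at "recovery_state" and any
# "blocked_by" entries in the JSON output
ceph pg 9.0 query

# Quick overview of OSD placement and up/in state
ceph osd tree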

See also the Ceph troubleshooting guide.