After running kubectl apply -f cluster.yaml (the sample YAML file from the rook GitHub repository), only a single pod, rook-ceph-mon-a-***, is running, even after waiting for an hour. How can I investigate this?
NAME READY STATUS RESTARTS AGE
rook-ceph-mon-a-7ff4fd545-qc2wl 1/1 Running 0 20m
Below are the logs of the single running pod:
$ kubectl logs rook-ceph-mon-a-7ff4fd545-qc2wl -n rook-ceph
2019-01-14 17:23:40.578 7f725478c140 0 ceph version 13.2.2
***
No filesystems configured
2019-01-14 17:23:40.643 7f723a050700 1 mon.a@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
2019-01-14 17:23:40.643 7f723a050700 0 log_channel(cluster) log [DBG] : fsmap
2019-01-14 17:23:40.645 7f723a050700 0 mon.a@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
2019-01-14 17:23:40.645 7f723a050700 0 mon.a@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
2019-01-14 17:23:40.645 7f723a050700 0 mon.a@0(leader).osd e1 crush map has features 1009089990564790272, adjusting msgr requires
2019-01-14 17:23:40.645 7f723a050700 0 mon.a@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
2019-01-14 17:23:40.643443 mon.a unknown.0 - 0 : [INF] mkfs cb8db53e-2d36-42eb-ab25-2a0918602655
2019-01-14 17:23:40.645 7f723a050700 1 mon.a@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
2019-01-14 17:23:40.647 7f723a050700 0 log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
2019-01-14 17:23:40.648 7f723a050700 0 log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
2019-01-14 17:23:40.635473 mon.a mon.0 10.32.0.43:6790/0 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
2019-01-14 17:23:40.641926 mon.a mon.0 10.32.0.43:6790/0 2 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
Answer 0 (score: 0)
Assuming you have followed the official ceph-quickstart guide on rook's GitHub page here, first check the Pods in question and retrieve their logs with:
kubectl logs <pod_name>
Please update your original question to include the output of this command.
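To go beyond the logs of a single pod, it usually helps to look at the rook operator (which is responsible for creating the remaining mon, mgr, and osd pods) and at cluster events. The following is a minimal diagnostic sketch; the label app=rook-ceph-operator and the rook-ceph namespace match rook's example manifests, but adjust them if your deployment differs.

```shell
#!/usr/bin/env bash
# Sketch: gather the usual evidence when a rook-ceph cluster is stuck
# with only one mon pod running.
rook_diag() {
  local ns="${1:-rook-ceph}"

  # Which pods exist, and on which nodes they were scheduled.
  kubectl -n "$ns" get pods -o wide

  # The operator decides when to create the other daemons; its log
  # usually names the blocking condition (quorum, disks, RBAC, ...).
  kubectl -n "$ns" logs -l app=rook-ceph-operator --tail=100

  # Scheduling and image-pull failures show up as events.
  kubectl -n "$ns" get events --sort-by=.lastTimestamp
}

rook_diag "rook-ceph"
```

If the operator runs in a separate namespace (older rook releases used rook-ceph-system), point the second command there instead.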
Answer 1 (score: 0)
Maybe your old data directory (/var/lib/rook) is not empty. I hit this error, deleted those files, and it started working!
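Stale state from a previous deployment under dataDirHostPath (which defaults to /var/lib/rook in the example cluster.yaml) can indeed prevent new mons from forming. Below is a small cleanup sketch; run it on every node that previously hosted ceph daemons, and only after tearing down the old cluster, since it deletes data irreversibly.

```shell
#!/usr/bin/env bash
# Sketch: remove leftover rook/ceph state from a node before redeploying.
# WARNING: destroys any ceph data stored under the given directory.
wipe_rook_state() {
  local dir="${1:-/var/lib/rook}"
  if [ -d "$dir" ] && [ -n "$(ls -A "$dir" 2>/dev/null)" ]; then
    echo "removing stale rook state in $dir"
    # ${dir:?} aborts if dir is somehow empty, guarding against rm -rf /*
    rm -rf "${dir:?}"/*
  else
    echo "$dir already empty, nothing to do"
  fi
}

wipe_rook_state "/var/lib/rook"
```

After cleaning each node, re-apply the operator and cluster manifests and the mons should bootstrap from scratch.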