Kubernetes node NotReady, possibly due to an "invalid memory address or nil pointer dereference" panic in reconciler.go

Date: 2019-02-02 10:50:52

Tags: kubernetes

Unfortunately, my experimental two-node Kubernetes 1.13.2 cluster has gotten into a state where the second node is NotReady. I have already tried systemctl restart kubelet on both nodes, but so far that has not helped.
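
For context, the basic checks behind that assessment look roughly like this (a minimal sketch; my_node stands for the affected node's name):

# Which node is NotReady, and what do its conditions say?
kubectl get nodes
kubectl describe node my_node

# Is the kubelet unit itself alive on the affected node?
systemctl status kubelet
journalctl -u kubelet.service --no-pager | tail -n 50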

On the master node, journalctl -u kubelet.service ends with this line:

reconciler.go:154] Reconciler: start to sync state
On the second node, journalctl -u kubelet.service contains the following lines:

server.go:999] Started kubelet
kubelet.go:1308] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
kubelet.go:1412] No api server defined - no node status update will be sent.
fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
status_manager.go:148] Kubernetes client is nil, not starting status manager.
kubelet.go:1829] Starting kubelet main sync loop.
kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet PLEG is not healthy: pleg has yet to be successful]
server.go:137] Starting to listen on 0.0.0.0:10250
server.go:333] Adding debug handlers to kubelet server.
volume_manager.go:248] Starting Kubelet Volume Manager
desired_state_of_world_populator.go:130] Desired state populator starts to run
container.go:409] Failed to create summary reader for "/system.slice/atop.service": none of the resources are being tracked.
kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet]
kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
cpu_manager.go:155] [cpumanager] starting with none policy
cpu_manager.go:156] [cpumanager] reconciling every 10s
policy_none.go:42] [cpumanager] none policy: Start
kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
reconciler.go:154] Reconciler: start to sync state
runtime.go:69] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/workspace/anago-v1.13.2-beta.0.75+cff46ab41ff0bb/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:76
/workspace/anago-v1.13.2-beta.0.75+cff46ab41ff0bb/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/workspace/anago-v1.13.2-beta.0.75+cff46ab41ff0bb/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/usr/local/go/src/runtime/asm_amd64.s:522
/usr/local/go/src/runtime/panic.go:513
/usr/local/go/src/runtime/panic.go:82
/usr/local/go/src/runtime/signal_unix.go:390
/workspace/anago-v1.13.2-beta.0.75+cff46ab41ff0bb/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler.go:562
/workspace/anago-v1.13.2-beta.0.75+cff46ab41ff0bb/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler.go:599
/workspace/anago-v1.13.2-beta.0.75+cff46ab41ff0bb/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler.go:419
/workspace/anago-v1.13.2-beta.0.75+cff46ab41ff0bb/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler.go:330
/workspace/anago-v1.13.2-beta.0.75+cff46ab41ff0bb/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler.go:155
/workspace/anago-v1.13.2-beta.0.75+cff46ab41ff0bb/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
/workspace/anago-v1.13.2-beta.0.75+cff46ab41ff0bb/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
/workspace/anago-v1.13.2-beta.0.75+cff46ab41ff0bb/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/workspace/anago-v1.13.2-beta.0.75+cff46ab41ff0bb/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler.go:143
/usr/local/go/src/runtime/asm_amd64.s:1333
kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
...

What is going on here, and how can I correct this situation?

UPDATE I have now stopped kubelet.service on both nodes, then also removed all Docker containers (docker rm) and all images (docker rmi) on both nodes. Then I restarted kubelet.service on the master only. Kubernetes apparently pulls in all of its own images and starts them again, but I now see multiple connection-refused errors in journalctl -u kubelet.service, such as:
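
In other words, the cleanup I performed amounts to the following (a sketch; destructive, so only suitable for an experimental cluster like this one):

# On both nodes: stop the kubelet so it cannot restart containers during cleanup
systemctl stop kubelet

# On both nodes: remove all containers and all images
docker rm -f $(docker ps -aq)
docker rmi -f $(docker images -q)

# On the master only: start the kubelet again
systemctl start kubelet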

kubelet_node_status.go:94] Unable to register node "my_node" with API server: Post https://my_master:6443/api/v1/nodes: dial tcp my_master:6443: connect: connection refused

kubectl get nodes can still reach the API server without problems, so this is probably closer to the root cause of what I am facing. Is it correct to assume that the kubelet is trying to connect to the API server at this point? How can I check whether the relevant credentials are still intact?
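
What I mean by checking the credentials is roughly this (a sketch; the paths are the kubeadm defaults and may differ on other setups):

# Is the API server reachable at all from the second node?
curl -k https://my_master:6443/healthz

# Which kubeconfig and client certificate does the kubelet use?
cat /etc/kubernetes/kubelet.conf

# Is the kubelet client certificate still within its validity period?
openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -dates -subject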

2 Answers:

Answer 0 (score: 0):

It is difficult to troubleshoot this without knowing the whole configuration.

But please try working through this checklist.
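
Such a checklist typically boils down to commands along these lines (a generic sketch, not necessarily the linked list itself):

# On each node: kubelet and container runtime health
systemctl status kubelet docker
journalctl -u kubelet.service --since "10 minutes ago" --no-pager

# From the master: control-plane and node state
kubectl get componentstatuses
kubectl get nodes -o wide
kubectl get pods -n kube-system

# From the worker: network path to the API server
curl -k https://my_master:6443/healthz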

Answer 1 (score: 0):

The problem was never explained, and I had to recreate the cluster with kubeadm reset and so on.
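
Recreating the cluster amounts to something like the following (a sketch; the token and hash are placeholders printed by kubeadm init):

# On both nodes: tear down all cluster state
kubeadm reset

# On the master: initialize a fresh control plane
kubeadm init

# On the second node: rejoin using the values printed by kubeadm init
kubeadm join my_master:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>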