canal and kube-proxy pods stuck in ContainerCreating on a Windows node in a Kubernetes cluster

Date: 2018-03-07 09:56:29

Tags: windows kubernetes kube-proxy

We are trying to add a Windows node to our CentOS-based Kubernetes cluster, which uses the canal pod network plugin.

To that end, we built a Windows Server 1709 VM and followed the instructions in this guide: https://docs.microsoft.com/en-us/virtualization/windowscontainers/kubernetes/getting-started-kubernetes-windows

The PowerShell script did in fact join the node to the cluster successfully:

NAME    STATUS  ROLES   AGE VERSION EXTERNAL-IP OS-IMAGE        KERNEL-VERSION  CONTAINER-RUNTIME
k8s-node-01 Ready   master  19d v1.9.3  <none>  CentOS Linux 7  (Core)  3.10.0-693.17.1.el7.x86_64  docker://1.12.6
k8s-node-02 Ready   <none>  19d v1.9.3  <none>  CentOS Linux 7  (Core)  3.10.0-693.17.1.el7.x86_64  docker://1.12.6
k8s-node-03 Ready   <none>  19d v1.9.3  <none>  CentOS Linux 7  (Core)  3.10.0-693.17.1.el7.x86_64  docker://1.12.6
k8s-wnode-01    Ready   <none>  17h v1.9.3  <none>  Windows Server  Datacenter  10.0.16299.125  

We even deployed a Windows-based sample application along with a service for it:

default       win-webserver-5c4c6df67f-2zllt                  1/1       Running             0          20m       10.244.8.77    k8s-wnode-01
default       win-webserver                  NodePort    10.106.133.105   <none>        80:32415/TCP                                                                                   23h       app=win-webserver

However, the pod is not reachable through the NodePort. Digging into the problem, we found that both the canal and kube-proxy pods on the Windows node are stuck in ContainerCreating:

kube-system   canal-dm7gl                                     3/3       Running             3          15d       172.16.8.102   k8s-node-01
kube-system   canal-jf5b5                                     3/3       Running             4          15d       172.16.8.104   k8s-node-02
kube-system   canal-kd8nw                                     3/3       Running             3          15d       172.16.8.105   k8s-node-03
kube-system   canal-tmxk5                                     0/3       ContainerCreating   0          18h       192.168.0.1    k8s-wnode-01
kube-system   kube-proxy-fmpvf                                1/1       Running             10         19d       172.16.8.102   k8s-node-01
kube-system   kube-proxy-gpb68                                1/1       Running             7          19d       172.16.8.104   k8s-node-02
kube-system   kube-proxy-l7wjv                                1/1       Running             6          19d       172.16.8.105   k8s-node-03
kube-system   kube-proxy-phqr7                                0/1       ContainerCreating   0          18h       192.168.0.1    k8s-wnode-01

Describing the two pods reveals what appear to be unrelated issues:

$ kubectl describe pod kube-proxy-phqr7 -n kube-system
  Normal   SuccessfulMountVolume  21m                  kubelet, k8s-wnode-01  MountVolume.SetUp succeeded for volume "kube-proxy-token-4cdx4"
  Normal   SuccessfulMountVolume  21m                  kubelet, k8s-wnode-01  MountVolume.SetUp succeeded for volume "lib-modules"
  Normal   SuccessfulMountVolume  21m                  kubelet, k8s-wnode-01  MountVolume.SetUp succeeded for volume "kube-proxy"
  Warning  FailedMount            3m (x17 over 21m)    kubelet, k8s-wnode-01  MountVolume.SetUp failed for volume "xtables-lock" : open /run/xtables.lock: The system cannot find the path specified.
  Warning  FailedMount            1m (x9 over 19m)     kubelet, k8s-wnode-01  Unable to mount volumes for pod "kube-proxy-phqr7_kube-system(6e18e3c8-2154-11e8-827c-000c299d5d24)": timeout expired waiting for volumes to attach/mount for pod "kube-system"/"kube-proxy-phqr7". list of unattached/unmounted volumes=[xtables-lock]
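
For context, here is what we believe the relevant part of the kube-proxy DaemonSet pod spec looks like (our reconstruction from the standard kubeadm manifests, not copied from our cluster, so treat the exact field values as an assumption). It would explain why the kubelet tries to open `/run/xtables.lock` on the Windows host:

```yaml
# Reconstructed fragment of the kube-proxy DaemonSet pod spec (assumption).
# /run/xtables.lock is the iptables lock file and only exists on Linux
# hosts, so MountVolume.SetUp fails on the Windows node.
spec:
  containers:
  - name: kube-proxy
    volumeMounts:
    - name: xtables-lock
      mountPath: /run/xtables.lock
  volumes:
  - name: xtables-lock
    hostPath:
      path: /run/xtables.lock
      type: FileOrCreate
```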

$ kubectl describe pod canal-tmxk5 -n kube-system
  Normal   SuccessfulMountVolume   22m                    kubelet, k8s-wnode-01  MountVolume.SetUp succeeded for volume "run"
  Normal   SuccessfulMountVolume   22m                    kubelet, k8s-wnode-01  MountVolume.SetUp succeeded for volume "canal-token-9twgx"
  Normal   SuccessfulMountVolume   22m                    kubelet, k8s-wnode-01  MountVolume.SetUp succeeded for volume "lib-modules"
  Normal   SuccessfulMountVolume   22m                    kubelet, k8s-wnode-01  MountVolume.SetUp succeeded for volume "cni-bin-dir"
  Normal   SuccessfulMountVolume   22m                    kubelet, k8s-wnode-01  MountVolume.SetUp succeeded for volume "cni-net-dir"
  Normal   SuccessfulMountVolume   22m                    kubelet, k8s-wnode-01  MountVolume.SetUp succeeded for volume "flannel-cfg"
  Normal   SandboxChanged          22m (x9 over 22m)      kubelet, k8s-wnode-01  Pod sandbox changed, it will be killed and re-created.
  Warning  FailedCreatePodSandBox  2m (x311 over 22m)     kubelet, k8s-wnode-01  Failed create pod sandbox.

What is xtables-lock, and why is the Windows node missing this file among kube-proxy's required volumes?

Why can't the pod sandbox (and what is a sandbox, exactly?) be created for canal, and where should I look for more information?

Documentation for Windows Kubernetes nodes is really lacking, and I don't know where to look: all Google results are about Linux nodes, and I can't find a way to apply the suggested fixes on Windows, since it's a completely different environment.
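
One thing we are considering (untested, and we are not sure it is even the right approach): since both DaemonSets clearly assume a Linux host, pinning them to Linux nodes with a nodeSelector would at least stop them from being scheduled on the Windows node. A sketch of the patch:

```yaml
# Hypothetical DaemonSet patch (untested assumption): restrict scheduling
# to Linux nodes using the beta.kubernetes.io/os node label (the label
# name used in Kubernetes 1.9).
spec:
  template:
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
```

This could be applied to both the canal and kube-proxy DaemonSets with `kubectl -n kube-system patch`, but then the Windows node would presumably need its own proxy and networking setup, which is exactly the part we cannot find documented.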

Below is a log dump from the kubelet console on the Windows node:

E0307 11:03:32.011134   80996 kubelet.go:1624] Unable to mount volumes for pod "kube-proxy-phqr7_kube-system(6e18e3c8-2154-11e8-827c-000c299d5d24)": timeout expired waiting for volumes to attach/mount for pod "kube-system"/"kube-proxy-phqr7". list of unattached/unmounted volumes=[xtables-lock]; skipping pod
E0307 11:03:32.011134   80996 pod_workers.go:186] Error syncing pod 6e18e3c8-2154-11e8-827c-000c299d5d24 ("kube-proxy-phqr7_kube-system(6e18e3c8-2154-11e8-827c-000c299d5d24)"), skipping: timeout expired waiting for volumes to attach/mount for pod "kube-system"/"kube-proxy-phqr7". list of unattached/unmounted volumes=[xtables-lock]
I0307 11:03:32.011134   80996 server.go:231] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-phqr7", UID:"6e18e3c8-2154-11e8-827c-000c299d5d24", APIVersion:"v1", ResourceVersion:"2241119", FieldPath:""}): type: 'Warning' reason: 'FailedMount' Unable to mount volumes for pod "kube-proxy-phqr7_kube-system(6e18e3c8-2154-11e8-827c-000c299d5d24)": timeout expired waiting for volumes to attach/mount for pod "kube-system"/"kube-proxy-phqr7". list of unattached/unmounted volumes=[xtables-lock]
...
I0307 11:03:32.633168   80996 kuberuntime_manager.go:853] getSandboxIDByPodUID got sandbox IDs ["590cac5a4ba9ec641835823eab19250a8d7984d3ba95da3c79af486f021d2161" "fb9dd26c3f6f26034aec38d2a82efe063ab30e0316323d7514556d8e74455b5d" "5b7de8875db3942b2b0d7538c0b5204c55fa405f9835995e68a15886f0c9e149"] for pod "canal-tmxk5_kube-system(6e16e04d-2154-11e8-827c-000c299d5d24)"
I0307 11:03:32.640170   80996 generic.go:380] PLEG: Write status for canal-tmxk5/kube-system: &container.PodStatus{ID:"6e16e04d-2154-11e8-827c-000c299d5d24", Name:"canal-tmxk5", Namespace:"kube-system", IP:"", ContainerStatuses:[]*container.ContainerStatus{}, SandboxStatuses:[]*runtime.PodSandboxStatus{(*runtime.PodSandboxStatus)(0xc042a334f0), (*runtime.PodSandboxStatus)(0xc042a337c0), (*runtime.PodSandboxStatus)(0xc042a33ae0)}} (err: <nil>)
I0307 11:03:32.644184   80996 kubelet.go:1880] SyncLoop (PLEG): "canal-tmxk5_kube-system(6e16e04d-2154-11e8-827c-000c299d5d24)", event: &pleg.PodLifecycleEvent{ID:"6e16e04d-2154-11e8-827c-000c299d5d24", Type:"ContainerDied", Data:"590cac5a4ba9ec641835823eab19250a8d7984d3ba95da3c79af486f021d2161"}
I0307 11:03:32.644184   80996 kubelet_pods.go:1349] Generating status for "canal-tmxk5_kube-system(6e16e04d-2154-11e8-827c-000c299d5d24)"
I0307 11:03:32.645170   80996 kubelet_pods.go:1314] pod waiting > 0, pending
W0307 11:03:32.645170   80996 pod_container_deletor.go:77] Container "590cac5a4ba9ec641835823eab19250a8d7984d3ba95da3c79af486f021d2161" not found in pod's containers
I0307 11:03:32.645170   80996 kubelet_pods.go:1349] Generating status for "canal-tmxk5_kube-system(6e16e04d-2154-11e8-827c-000c299d5d24)"
I0307 11:03:32.645170   80996 kubelet_pods.go:1314] pod waiting > 0, pending
I0307 11:03:32.645170   80996 status_manager.go:353] Ignoring same status for pod "canal-tmxk5_kube-system(6e16e04d-2154-11e8-827c-000c299d5d24)", status: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-03-06 16:39:31 +0100 CET Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-03-06 16:39:31 +0100 CET Reason:ContainersNotReady Message:containers with unready status: [calico-node install-cni kube-flannel]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2018-03-06 16:41:18 +0100 CET Reason: Message:}] Message: Reason: HostIP:192.168.0.1 PodIP:192.168.0.1 StartTime:2018-03-06 16:39:31 +0100 CET InitContainerStatuses:[] ContainerStatuses:[{Name:calico-node State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:quay.io/calico/node:v2.6.7 ImageID: ContainerID:} {Name:install-cni State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:quay.io/calico/cni:v1.11.2 ImageID: ContainerID:} {Name:kube-flannel State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:quay.io/coreos/flannel:v0.9.1 ImageID: ContainerID:}] QOSClass:Burstable}
I0307 11:03:32.651168   80996 volume_manager.go:342] Waiting for volumes to attach and mount for pod "canal-tmxk5_kube-system(6e16e04d-2154-11e8-827c-000c299d5d24)"
I0307 11:03:32.657170   80996 kubelet.go:1263] Container garbage collection succeeded
I0307 11:03:32.697183   80996 volume_host.go:218] using default mounter/exec for kubernetes.io/configmap
I0307 11:03:32.710179   80996 reconciler.go:264] operationExecutor.MountVolume started for volume "flannel-cfg" (UniqueName: "kubernetes.io/configmap/6e16e04d-2154-11e8-827c-000c299d5d24-flannel-cfg") pod "canal-tmxk5" (UID: "6e16e04d-2154-11e8-827c-000c299d5d24") Volume is already mounted to pod, but remount was requested.
I0307 11:03:32.710179   80996 volume_host.go:218] using default mounter/exec for kubernetes.io/secret
I0307 11:03:32.710179   80996 reconciler.go:264] operationExecutor.MountVolume started for volume "canal-token-9twgx" (UniqueName: "kubernetes.io/secret/6e16e04d-2154-11e8-827c-000c299d5d24-canal-token-9twgx") pod "canal-tmxk5" (UID: "6e16e04d-2154-11e8-827c-000c299d5d24") Volume is already mounted to pod, but remount was requested.
I0307 11:03:32.711174   80996 volume_host.go:218] using default mounter/exec for kubernetes.io/host-path
I0307 11:03:32.711174   80996 secret.go:186] Setting up volume canal-token-9twgx for pod 6e16e04d-2154-11e8-827c-000c299d5d24 at c:\var\lib\kubelet\pods\6e16e04d-2154-11e8-827c-000c299d5d24\volumes\kubernetes.io~secret\canal-token-9twgx
I0307 11:03:32.711174   80996 volume_host.go:218] using default mounter/exec for kubernetes.io/empty-dir
I0307 11:03:32.711174   80996 volume_host.go:218] using default mounter/exec for kubernetes.io/empty-dir
I0307 11:03:32.712174   80996 empty_dir.go:264] pod 6e16e04d-2154-11e8-827c-000c299d5d24: mounting tmpfs for volume wrapped_canal-token-9twgx
I0307 11:03:32.710179   80996 configmap.go:187] Setting up volume flannel-cfg for pod 6e16e04d-2154-11e8-827c-000c299d5d24 at c:\var\lib\kubelet\pods\6e16e04d-2154-11e8-827c-000c299d5d24\volumes\kubernetes.io~configmap\flannel-cfg
I0307 11:03:32.713173   80996 mount_windows.go:55] azureMount: mounting source ("tmpfs"), target ("c:\\var\\lib\\kubelet\\pods\\6e16e04d-2154-11e8-827c-000c299d5d24\\volumes\\kubernetes.io~secret\\canal-token-9twgx"), with options ([])
I0307 11:03:32.713173   80996 volume_host.go:218] using default mounter/exec for kubernetes.io/empty-dir
I0307 11:03:32.715190   80996 volume_host.go:218] using default mounter/exec for kubernetes.io/empty-dir
I0307 11:03:32.716175   80996 round_trippers.go:436] GET https://172.16.8.102:6443/api/v1/namespaces/kube-system/secrets/canal-token-9twgx?resourceVersion=0 200 OK in 1 milliseconds
I0307 11:03:32.717180   80996 secret.go:213] Received secret kube-system/canal-token-9twgx containing (3) pieces of data, 1884 total bytes
I0307 11:03:32.718174   80996 round_trippers.go:436] GET https://172.16.8.102:6443/api/v1/namespaces/kube-system/configmaps/canal-config?resourceVersion=0 200 OK in 1 milliseconds
I0307 11:03:32.718174   80996 atomic_writer.go:332] c:\var\lib\kubelet\pods\6e16e04d-2154-11e8-827c-000c299d5d24\volumes\kubernetes.io~secret\canal-token-9twgx: current paths:   [c:\var\lib\kubelet\pods\6e16e04d-2154-11e8-827c-000c299d5d24\volumes\kubernetes.io~secret\canal-token-9twgx\..2018_03_07_10_03_27.050789875\ca.crt c:\var\lib\kubelet\pods\6e16e04d-2154-11e8-827c-000c299d5d24\volumes\kubernetes.io~secret\canal-token-9twgx\..2018_03_07_10_03_27.050789875\namespace c:\var\lib\kubelet\pods\6e16e04d-2154-11e8-827c-000c299d5d24\volumes\kubernetes.io~secret\canal-token-9twgx\..2018_03_07_10_03_27.050789875\token]
I0307 11:03:32.718174   80996 atomic_writer.go:344] c:\var\lib\kubelet\pods\6e16e04d-2154-11e8-827c-000c299d5d24\volumes\kubernetes.io~secret\canal-token-9twgx: new paths:       [ca.crt namespace token]
I0307 11:03:32.719173   80996 atomic_writer.go:347] c:\var\lib\kubelet\pods\6e16e04d-2154-11e8-827c-000c299d5d24\volumes\kubernetes.io~secret\canal-token-9twgx: paths to remove: map[c:\var\lib\kubelet\pods\6e16e04d-2154-11e8-827c-000c299d5d24\volumes\kubernetes.io~secret\canal-token-9twgx\..2018_03_07_10_03_27.050789875\token:{} c:\var\lib\kubelet\pods\6e16e04d-2154-11e8-827c-000c299d5d24\volumes\kubernetes.io~secret\canal-token-9twgx\..2018_03_07_10_03_27.050789875\ca.crt:{} c:\var\lib\kubelet\pods\6e16e04d-2154-11e8-827c-000c299d5d24\volumes\kubernetes.io~secret\canal-token-9twgx\..2018_03_07_10_03_27.050789875\namespace:{}]
I0307 11:03:32.726175   80996 atomic_writer.go:159] pod kube-system/canal-tmxk5 volume canal-token-9twgx: write required for target directory c:\var\lib\kubelet\pods\6e16e04d-2154-11e8-827c-000c299d5d24\volumes\kubernetes.io~secret\canal-token-9twgx
I0307 11:03:32.734177   80996 atomic_writer.go:176] pod kube-system/canal-tmxk5 volume canal-token-9twgx: performed write of new data to ts data directory: c:\var\lib\kubelet\pods\6e16e04d-2154-11e8-827c-000c299d5d24\volumes\kubernetes.io~secret\canal-token-9twgx\..2018_03_07_10_03_32.145900189
I0307 11:03:32.727175   80996 configmap.go:214] Received configMap kube-system/canal-config containing (4) pieces of data, 911 total bytes
I0307 11:03:32.798178   80996 atomic_writer.go:332] c:\var\lib\kubelet\pods\6e16e04d-2154-11e8-827c-000c299d5d24\volumes\kubernetes.io~configmap\flannel-cfg: current paths:   [c:\var\lib\kubelet\pods\6e16e04d-2154-11e8-827c-000c299d5d24\volumes\kubernetes.io~configmap\flannel-cfg\..2018_03_07_10_03_27.611158500\canal_iface c:\var\lib\kubelet\pods\6e16e04d-2154-11e8-827c-000c299d5d24\volumes\kubernetes.io~configmap\flannel-cfg\..2018_03_07_10_03_27.611158500\cni_network_config c:\var\lib\kubelet\pods\6e16e04d-2154-11e8-827c-000c299d5d24\volumes\kubernetes.io~configmap\flannel-cfg\..2018_03_07_10_03_27.611158500\masquerade c:\var\lib\kubelet\pods\6e16e04d-2154-11e8-827c-000c299d5d24\volumes\kubernetes.io~configmap\flannel-cfg\..2018_03_07_10_03_27.611158500\net-conf.json]
I0307 11:03:32.798178   80996 atomic_writer.go:344] c:\var\lib\kubelet\pods\6e16e04d-2154-11e8-827c-000c299d5d24\volumes\kubernetes.io~configmap\flannel-cfg: new paths:       [canal_iface cni_network_config masquerade net-conf.json]
I0307 11:03:32.798178   80996 atomic_writer.go:347] c:\var\lib\kubelet\pods\6e16e04d-2154-11e8-827c-000c299d5d24\volumes\kubernetes.io~configmap\flannel-cfg: paths to remove: map[c:\var\lib\kubelet\pods\6e16e04d-2154-11e8-827c-000c299d5d24\volumes\kubernetes.io~configmap\flannel-cfg\..2018_03_07_10_03_27.611158500\masquerade:{} c:\var\lib\kubelet\pods\6e16e04d-2154-11e8-827c-000c299d5d24\volumes\kubernetes.io~configmap\flannel-cfg\..2018_03_07_10_03_27.611158500\net-conf.json:{} c:\var\lib\kubelet\pods\6e16e04d-2154-11e8-827c-000c299d5d24\volumes\kubernetes.io~configmap\flannel-cfg\..2018_03_07_10_03_27.611158500\canal_iface:{} c:\var\lib\kubelet\pods\6e16e04d-2154-11e8-827c-000c299d5d24\volumes\kubernetes.io~configmap\flannel-cfg\..2018_03_07_10_03_27.611158500\cni_network_config:{}]
I0307 11:03:32.799180   80996 atomic_writer.go:159] pod kube-system/canal-tmxk5 volume flannel-cfg: write required for target directory c:\var\lib\kubelet\pods\6e16e04d-2154-11e8-827c-000c299d5d24\volumes\kubernetes.io~configmap\flannel-cfg
I0307 11:03:32.811187   80996 volume_host.go:218] using default mounter/exec for kubernetes.io/configmap
I0307 11:03:32.812179   80996 volume_host.go:218] using default mounter/exec for kubernetes.io/host-path
I0307 11:03:32.835183   80996 atomic_writer.go:176] pod kube-system/canal-tmxk5 volume flannel-cfg: performed write of new data to ts data directory: c:\var\lib\kubelet\pods\6e16e04d-2154-11e8-827c-000c299d5d24\volumes\kubernetes.io~configmap\flannel-cfg\..2018_03_07_10_03_32.269248344
I0307 11:03:32.912190   80996 volume_host.go:218] using default mounter/exec for kubernetes.io/host-path
I0307 11:03:32.956200   80996 volume_manager.go:371] All volumes are attached and mounted for pod "canal-tmxk5_kube-system(6e16e04d-2154-11e8-827c-000c299d5d24)"
I0307 11:03:32.956200   80996 kuberuntime_manager.go:442] Syncing Pod "canal-tmxk5_kube-system(6e16e04d-2154-11e8-827c-000c299d5d24)": &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:canal-tmxk5,GenerateName:canal-,Namespace:kube-system,SelfLink:/api/v1/namespaces/kube-system/pods/canal-tmxk5,UID:6e16e04d-2154-11e8-827c-000c299d5d24,ResourceVersion:2241118,Generation:0,CreationTimestamp:2018-03-06 16:38:34 +0100 CET,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{controller-revision-hash: 1120593895,k8s-app: canal,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2018-03-07T10:28:11.9157574+01:00,kubernetes.io/config.source: api,scheduler.alpha.kubernetes.io/critical-pod: ,},OwnerReferences:[{extensions/v1beta1 DaemonSet canal b747d502-1614-11e8-931d-000c299d5d24 0xc042d93dd8 0xc042d93dd9}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{lib-modules {HostPathVolumeSource{Path:/lib/modules,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {var-run-calico {&HostPathVolumeSource{Path:/var/run/calico,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {cni-bin-dir {&HostPathVolumeSource{Path:/opt/cni/bin,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {cni-net-dir {&HostPathVolumeSource{Path:/etc/cni/net.d,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {run {&HostPathVolumeSource{Path:/run,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {flannel-cfg {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:canal-config,},Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil}} {canal-token-9twgx {nil nil nil nil nil &SecretVolumeSource{SecretName:canal-token-9twgx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{calico-node quay.io/calico/node:v2.6.7 [] []  [] [] [{DATASTORE_TYPE kubernetes nil} {FELIX_LOGSEVERITYSYS info nil} {CALICO_NETWORKING_BACKEND none nil} {CLUSTER_TYPE k8s,canal nil} {CALICO_DISABLE_FILE_LOGGING true nil} {FELIX_IPTABLESREFRESHINTERVAL 60 nil} {FELIX_IPV6SUPPORT false nil} {WAIT_FOR_DATASTORE true nil} {IP  nil} {NODENAME  EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {FELIX_DEFAULTENDPOINTTOHOSTACTION ACCEPT nil} {FELIX_HEALTHENABLED true nil}] {map[] map[cpu:{{250 -3} {<nil>} 250m DecimalSI}]} [{lib-modules true /lib/modules  <nil>} {var-run-calico false /var/run/calico  <nil>} {canal-token-9twgx true /var/run/secrets/kubernetes.io/serviceaccount  <nil>}] [] &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:9099,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readiness,Port:9099,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} false false false} {install-cni quay.io/calico/cni:v1.11.2 [/install-cni.sh] []  [] [] [{CNI_CONF_NAME 10-calico.conflist nil} {CNI_NETWORK_CONFIG  
&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:canal-config,},Key:cni_network_config,Optional:nil,},SecretKeyRef:nil,}} {KUBERNETES_NODE_NAME  &EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] {map[] map[]} [{cni-bin-dir false /host/opt/cni/bin  <nil>} {cni-net-dir false /host/etc/cni/net.d  <nil>} {canal-token-9twgx true /var/run/secrets/kubernetes.io/serviceaccount  <nil>}] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false} {kube-flannel quay.io/coreos/flannel:v0.9.1 [/opt/bin/flanneld --ip-masq --kube-subnet-mgr] []  [] [] [{POD_NAME  &EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {POD_NAMESPACE  &EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {FLANNELD_IFACE  &EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:canal-config,},Key:canal_iface,Optional:nil,},SecretKeyRef:nil,}} {FLANNELD_IP_MASQ  &EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:canal-config,},Key:masquerade,Optional:nil,},SecretKeyRef:nil,}}] {map[] map[]} [{run false /run  <nil>} {flannel-cfg false /etc/kube-flannel/  <nil>} {canal-token-9twgx true /var/run/secrets/kubernetes.io/serviceaccount  <nil>}] [] nil nil nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:canal,DeprecatedServiceAccount:canal,NodeName:k8s-wnode-01,HostNetwork:true,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{ Exists  NoSchedule <nil>} {CriticalAddonsOnly Exists   <nil>} { Exists  NoExecute <nil>} {node.kubernetes.io/not-ready Exists  NoExecute <nil>} {node.kubernetes.io/unreachable Exists  NoExecute <nil>} {node.kubernetes.io/disk-pressure Exists  NoSchedule <nil>} {node.kubernetes.io/memory-pressure Exists  NoSchedule <nil>}],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-03-06 16:39:31 +0100 CET  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-03-06 16:39:31 +0100 CET ContainersNotReady containers with unready status: [calico-node install-cni kube-flannel]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-03-06 16:41:18 +0100 CET  }],Message:,Reason:,HostIP:192.168.0.1,PodIP:192.168.0.1,StartTime:2018-03-06 16:39:31 +0100 CET,ContainerStatuses:[{calico-node {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 quay.io/calico/node:v2.6.7  } {install-cni {&ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 quay.io/calico/cni:v1.11.2  } {kube-flannel {&ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 quay.io/coreos/flannel:v0.9.1  }],QOSClass:Burstable,InitContainerStatuses:[],},}
I0307 11:03:32.958189   80996 kuberuntime_manager.go:403] No ready sandbox for pod "canal-tmxk5_kube-system(6e16e04d-2154-11e8-827c-000c299d5d24)" can be found. Need to start a new one
I0307 11:03:32.958189   80996 kuberuntime_manager.go:571] computePodActions got {KillPod:true CreateSandbox:true SandboxID:590cac5a4ba9ec641835823eab19250a8d7984d3ba95da3c79af486f021d2161 Attempt:518 NextInitContainerToStart:nil ContainersToStart:[0 1 2] ContainersToKill:map[]} for pod "canal-tmxk5_kube-system(6e16e04d-2154-11e8-827c-000c299d5d24)"
I0307 11:03:32.959195   80996 kuberuntime_manager.go:589] Stopping PodSandbox for "canal-tmxk5_kube-system(6e16e04d-2154-11e8-827c-000c299d5d24)", will start new one
I0307 11:03:32.959195   80996 server.go:231] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"canal-tmxk5", UID:"6e16e04d-2154-11e8-827c-000c299d5d24", APIVersion:"v1", ResourceVersion:"2241118", FieldPath:""}): type: 'Normal' reason: 'SandboxChanged' Pod sandbox changed, it will be killed and re-created.

0 Answers
