kubelet is not registering with the api-server

Time: 2017-12-12 15:47:08

Tags: kubernetes

I am trying to set up a Kubernetes cluster by manually running the api-server, etcd, flanneld, the kubelet, and the other necessary components.

I am following this guide: https://icicimov.github.io/blog/kubernetes/Kubernetes-cluster-step-by-step-Part3/
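For reference, on kubelet v1.6 the API server is normally supplied either through the --api-servers flag or through --kubeconfig together with --require-kubeconfig; a minimal sketch of such an invocation (the kubeconfig path below is illustrative, not my actual setup):

kubelet \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --require-kubeconfig=true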

After starting the kubelet, I repeatedly see the following message in its logs:

Setting node annotation to enable volume controller attach/detach

kubeconfig

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM5ekNDQWQrZ0F3SUJBZ0lKQUljc1NyR1g4ejE0TUEwR0NTcUdTSWIzRFFFQkN3VUFNQkl4RURBT0JnTlYKQkFNTUIydDFZbVV0WTJFd0hoY05NVGN4TWpFeU1UTTFOalUxV2hjTk5EVXdOREk1TVRNMU5qVTFXakFTTVJBdwpEZ1lEVlFRRERBZHJkV0psTFdOaE1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBCnZsMTRrWkJaRklYUnFkYk5BUThnTkxFaWJ5ajBVVEkzVURIbUYxWUpXemlNMXYzVnRyaXlRcWZWTnB1dk5RbGsKRkRCWVlCSXpSaU8vcWpnOTV1ZnRhZHF4RzBkWVhFdWluYXRUcWdGaE9iSGlrOFdTa3JBY0lVUlNDTThTdTlwSQpnb0lleFgwSnRTMit6bU5kSTBzQkhNUUtVVXNGNjRUUEJ6d2djM0o2V3J5V1graXU2WHVRN1ZFVEtsR25IY1NqCjdTQnViL1VocUdDWk5OYjh3d1kwUTdhQUtQN3ZsMFRHQ1Z1ckV3Q3FhNGErVVVsbkttQk1ESDdqdjM2Uzdab04KNVFlMzZSQVRkbjM4SlpjLzA1cGVkMlNNVWxNVElycEZPRkpFTnpwNGVTWTZCMUFwVDUxSVo5UGVlcnBLMjFCWQpKclhCSGxUaHE2NFN1MlpkUWZ5bjh3SURBUUFCbzFBd1RqQWRCZ05WSFE0RUZnUVU0dEc5RUkxSDdwYU93SzFGCldjM2dqdEt3TE9vd0h3WURWUjBqQkJnd0ZvQVU0dEc5RUkxSDdwYU93SzFGV2MzZ2p0S3dMT293REFZRFZSMFQKQkFVd0F3RUIvekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBaHZjUG1GSXlManQ3TWI4cHI5MlMwdHVkN1FXVgorZHpKTjZuNDY2b2lyWnVqWFpBWi9CUjVBdFl2cERYWWVRK3FBY29PM2Jrc3BBMENERGJiOE5qVUtOSGRobS9uCmhMcjhSZmlUVDlUMUdaa29vbnNrVGlzQlp5Y3NLNDNvRFJqanBueTVSbW5DTU5JUkhxQXl0Uml1bFNXRnRpbU0Kemd6KzJMdW1BUGFuQ044L1RCSnR3aXhHaVR6aFMySFFNOGFYRzlQRmxsbjdaNXhIdWNkQmRUbm1TeVhKdndUKwpLc28xVFRvMnZEbjhNN2Q2M0hmU3I5RU51UVZ4WEkwbFhuTitkcnNRNi90cmh4aEN0QXRvWERsWXh2eHg3WWhsCkFwM0JiSGxaTzBMR1pIeGUyYkxRUGxLcS9OaUxXY28xS05CdUFES0dJditEVGhUUnVjZFdZcm9jY2c9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://k8s-api.virtual.local
  name: k8s.virtual.local
contexts:
- context:
    cluster: k8s.virtual.local
    user: kubelet
  name: k8s.virtual.local
current-context: ""
kind: Config
preferences: {}
users:
- name: kubelet
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNvRENDQVlnQ0NRQ21xbVJaOUFuK2FUQU5CZ2txaGtpRzl3MEJBUXNGQURBU01SQXdEZ1lEVlFRRERBZHIKZFdKbExXTmhNQjRYRFRFM01USXhNakV6TlRjek1sb1hEVE0zTURneU9URXpOVGN6TWxvd0VqRVFNQTRHQTFVRQpBd3dIYTNWaVpXeGxkRENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFNWS9CSnZtCmF0YVJ6b282aEkxbTZScEpqT0gramFvdm91aG41dmJPdVEvemttQmRLN0FUNXBTN1IyRklrQm9ucWNEcTRqMWIKSThmMW5raloyY0Vnc1J1bFkxZGtTZkRYb2xZNjBUYWsyUjI4TkZHYTU0d0c3T08xY0pXRlk5M0dmQkZIVzZTOApqMDVJSjRDVmNCRWtoR0xWQloraHpYamp5TXJtdmRnOHBCWWQramRnU1MvYTcyVGpzUHlmT0FJYkMyZG1HbEtLCklvRW41dHJ1bTd3alZFaUp4OFFIM3JYK2Y4TDd0WDIwN3RBQXNVYXBRbVNScTcrd09IZDh5QTBHWjVKZzF4cmMKdGJrZGk1NUlZQVIrREIxTGJ0RHNKekx4MFdRd0RadTdtMXFiTlBMZDM1YlNlR2c5bHNUY0pwdGZSUlV0VDJLTwpGK0V2eWd1dC9Fa2hDVDhDQXdFQUFUQU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUFIYjdwZ2lSVjNVTU0wa0ozClRSV2VCbktaQU1XMitXemU5Q0xLbERFUXhVS1JiNkFZNjZJTzBmNW4yNlZXelByMFdMcVhSTHFic2JJNzFGenYKbW5kdDZzSVJNT0hRcy83dXRHMENRaUNMeEVvYjRreFBwYlI5SEVpOFhoNlBaRm0zaWdUNkNFV1BodzEwWU1sVApBbk15a1FKT0Q5WFFnOXhpR04xbjBwMVNRTWxpa0pzSTUyeGJMSlpqRVE2RjBxZyszL1dsNS9CYklSeVhZSENICmdZbDQwT0oxWG9MWlFrbkJLT29OUkJueEcyd2htdkVXNkszR3B2aVpGZ3Q1cjBQSEdKTnhzZjdscllnc0dFR1UKMjZaTzFhWXZQdGJSMGYzMTMydnkvNUtBTjlGWWlYWkRCc002Q256UFZ6Uy9CYUs2eFRxaDJnRzQ1S1lKYnVyZAo1MHRUUFE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcGdJQkFBS0NBUUVBeGo4RW0rWnExcEhPaWpxRWpXYnBHa21NNGY2TnFpK2k2R2ZtOXM2NUQvT1NZRjByCnNCUG1sTHRIWVVpUUdpZXB3T3JpUFZzangvV2VTTm5ad1NDeEc2VmpWMlJKOE5laVZqclJOcVRaSGJ3MFVacm4KakFiczQ3VndsWVZqM2NaOEVVZGJwTHlQVGtnbmdKVndFU1NFWXRVRm42SE5lT1BJeXVhOTJEeWtGaDM2TjJCSgpMOXJ2Wk9Pdy9KODRBaHNMWjJZYVVvb2lnU2ZtMnU2YnZDTlVTSW5IeEFmZXRmNS93dnUxZmJUdTBBQ3hScWxDClpKR3J2N0E0ZDN6SURRWm5rbURYR3R5MXVSMkxua2hnQkg0TUhVdHUwT3duTXZIUlpEQU5tN3ViV3BzMDh0M2YKbHRKNGFEMld4TndtbTE5RkZTMVBZbzRYNFMvS0M2MzhTU0VKUHdJREFRQUJBb0lCQVFDcUdoNXhTcGMzZnlweQpidDJYbXNxK2xJZCt6blZ0cHF3b3NDWjhkVXBUaHBKOWZ0UmlvK0RBazZVZXN5MTZVN2dUWVRjNG9FQW1iZmtmCjExVkJvalIxWFViTkVLOWxLUkVRM2l6dnJ5amdtOEZrbC82L3BwMlNrUGVHUkVzNVd2clB0S1BNeVVKSlVCNGMKOVp5UUNQNVM4eWQ5SGs5NHdESms3dkhNWGRRSmFMTERvSDlCVWlXQ2ZxaHFGeW5laU8ySDlFM3p3NE90Z1k3dQpKMXdndU1LSHdvSmFCMC9mQUNubDRWRFBMVWwwcGVHWXBzd3lEcTF6dUxTdER0L3kwMmpHSXRPeGJwWmZNd3RWCkhEVXFqMUlka0Foc2RqZkRaOGtXREkrTjJ0Y2syQjVWMXBtM2FwNXFZbTNNVVBlYnB4L1BaZ2VIdk13YnFZQkkKWGtFYStPUVpBb0dCQU9XRkxETXdpT1ptajZnRWQvT1JTNWMxR1pXWmNESWF1VlYvZjJIdEUzemNZakowWjZLUgp5SHJHZjIyMHl3OVNxRzdIS3RVUktiLzVuM2dUMlRCYnlFYjVGbDd2R1VMSG9zbTJxUzY4SERjK2c0NzVMa1lrCitnU0RYYTFwWTRLVEliWGtKQXpNNFplRkdLUXhScHY3eUhocUpDQjZkUXNSV1NId25LbHI1TFhyQW9HQkFOMGUKTEhGZDJMR2tXUHZucExocy9lT1MyQ0FjM3hyeWxFS3FDQ2JVUXVhcW1LcWd2VFpEUkphQWM3V2hLU3I3MjFvZwpwNWRoNFAzMzJjaHhrbG90QzhLc2lCSGt3dmpORC9IUFZQVlpyZXhiWjRlNCtSeEo1MHpEUzhQY3c4SXhQSjEyCnVreURYZU4yeHhNUFJaQlIvajVZaTZobml4RmFhS3hpdXdCM3U4RDlBb0dCQUw5bzYyNlpXR0pGUUNMUDd6VTYKZzc3TGN0VzNDOEZOVmlpK1ZuNVZWMzQyME5IaEVCaWMyWVBDakx6eUhMSmZyY1lNNVdTaGxwN2FUNnExYXRpUQpncHJsMmtrN3YyWlkxU0xCNlovbkV1VGpocFd5cTJ1bUpMZWswbmZ2UHlURERVY0N4eW5CcDVWVVV6T0RRSzZQCk1TVnk1MFFLdkJlSjFUcWZ6aGJndXZFWEFvR0JBSnpuc1cvTXdWemxHNE85ajZTVEt1SlhMR2cxTkpnaHROVk4KWkxWdy8veEE2RTZEKzJCTEFadXVrTzA4N0VLbEw3Vlg0TFRLYnVhby92QitydlN3YkZ6N0l4OVhib2N3dEhUSgp4Q2JLT1dHMFJ0WUhpelhvdDJwQVZ6NG9KUDFqQlBsVDY4VXBudkV2TXZxeVpwR1ByVk0rYi9QVGJkcWxoZ3QxCmorODRCNUpkQW9HQkFOQnZ4TXpzNkpkTERRTm1NRnJkdUNEYW5RRUM5UWxhRmMxZ245THFZNENlTWljUXlocmgKbGhmQXhvYlN3N0FiQzB3MmZpMzZvY1NHSlR2WnJDOGZ0Z3c4K1lpQ0hSL21EdDF0MFRkajFub0FjU2hTU2hzTwo1K3M5Z3JjRWVQWWdCSXROUjRvUWhoWThiTWN0Z2I2aW9hUERUTXh5ZWNrRnZ0NEYrSmg2N2tucAotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
    token: Vboh25RdODgE88S6AOw0ymBUyRPqBJtY
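
Note that current-context in the file above is empty. A context can be selected in a kubeconfig with kubectl config use-context; a sketch, assuming the file is saved at /var/lib/kubelet/kubeconfig (the path is illustrative):

kubectl config use-context k8s.virtual.local --kubeconfig=/var/lib/kubelet/kubeconfig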

Logs from kubelet.service

Dec 13 09:09:34 k8s02 systemd[1]: Started Kubernetes Kubelet Server.
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.007052    1870 feature_gate.go:144] feature gates: map[]
Dec 13 09:09:35 k8s02 kubelet[1870]: W1213 03:39:35.007192    1870 server.go:469] No API client: no api servers specified
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.173220    1870 docker.go:364] Connecting to docker on unix:///var/run/docker.sock
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.173252    1870 docker.go:384] Start docker client with request timeout=2m0s
Dec 13 09:09:35 k8s02 kubelet[1870]: W1213 03:39:35.193198    1870 cni.go:157] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.205900    1870 manager.go:143] cAdvisor running in container: "/system.slice/kubelet.service"
Dec 13 09:09:35 k8s02 kubelet[1870]: W1213 03:39:35.226706    1870 manager.go:151] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.245943    1870 fs.go:117] Filesystem partitions: map[/dev/sda1:{mountpoint:/var/lib/docker/aufs major:8 minor:1 fsType:ext4 blockSize:0}]
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.247705    1870 manager.go:198] Machine: {NumCores:2 CpuFrequency:2194918 MemoryCapacity:2097295360 MachineID:89ca704fa3ad49d0a93236a30defaf56 SystemUUID:43DB5E50-1429-46A7-8144-7FAA6F9DEE5E BootID:ed517130-f312-4e2f-a9ad-0ae735ba910b Filesystems:[{Device:/dev/sda1 Capacity:10340831232 Type:vfs Inodes:1280000 HasInodes:true}] DiskMap:map[8:0:{Name:sda Major:8 Minor:0 Size:10737418240 Scheduler:deadline} 8:16:{Name:sdb Major:8 Minor:16 Size:10485760 Scheduler:deadline}] NetworkDevices:[{Name:enp0s3 MacAddress:02:15:29:52:b1:84 Speed:1000 Mtu:1500} {Name:enp0s8 MacAddress:08:00:27:4a:70:e4 Speed:1000 Mtu:1500} {Name:flannel.1 MacAddress:ba:53:27:6e:6e:e0 Speed:0 Mtu:1450}] Topology:[{Id:0 Memory:2097295360 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:6291456 Type:Unified Level:3} {Size:134217728 Type:Unified Level:4}]} {Id:1 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:6291456 Type:Unified Level:3} {Size:134217728 Type:Unified Level:4}]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.253498    1870 manager.go:204] Version: {KernelVersion:4.4.0-103-generic ContainerOsVersion:Ubuntu 16.04.3 LTS DockerVersion:17.09.1-ce CadvisorVersion: CadvisorRevision:}
Dec 13 09:09:35 k8s02 kubelet[1870]: W1213 03:39:35.254524    1870 server.go:350] No api server defined - no events will be sent to API server.
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.254546    1870 server.go:509] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.256708    1870 container_manager_linux.go:245] container manager verified user specified cgroup-root exists: /
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.256737    1870 container_manager_linux.go:250] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs ProtectKernelDefaults:false EnableCRI:true NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[]}
Dec 13 09:09:35 k8s02 kubelet[1870]: W1213 03:39:35.259991    1870 kubelet_network.go:70] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.260167    1870 kubelet.go:494] Hairpin mode set to "hairpin-veth"
Dec 13 09:09:35 k8s02 kubelet[1870]: W1213 03:39:35.268388    1870 cni.go:157] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.278033    1870 docker_service.go:187] Docker cri networking managed by kubernetes.io/no-op
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.286747    1870 docker_service.go:204] Setting cgroupDriver to cgroupfs
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.302470    1870 remote_runtime.go:41] Connecting to runtime service /var/run/dockershim.sock
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.305541    1870 kuberuntime_manager.go:171] Container runtime docker initialized, version: 17.09.1-ce, apiVersion: 1.32.0
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.307221    1870 server.go:869] Started kubelet v1.6.7
Dec 13 09:09:35 k8s02 kubelet[1870]: W1213 03:39:35.307415    1870 kubelet.go:1242] No api server defined - no node status update will be sent.
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.308683    1870 server.go:127] Starting to listen on 0.0.0.0:10250
Dec 13 09:09:35 k8s02 kubelet[1870]: E1213 03:39:35.318874    1870 kubelet.go:1165] Image garbage collection failed: unable to find data for container /
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.320089    1870 server.go:294] Adding debug handlers to kubelet server.
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.322382    1870 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Dec 13 09:09:35 k8s02 kubelet[1870]: E1213 03:39:35.326073    1870 kubelet.go:1661] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container /
Dec 13 09:09:35 k8s02 kubelet[1870]: E1213 03:39:35.326102    1870 kubelet.go:1669] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container /
Dec 13 09:09:35 k8s02 kubelet[1870]: E1213 03:39:35.347510    1870 container_manager_linux.go:821] Error parsing docker version "17.09.1-ce": illegal zero-prefixed version component "09" in "17.09.1-ce"
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.348036    1870 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.348100    1870 status_manager.go:136] Kubernetes client is nil, not starting status manager.
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.348107    1870 kubelet.go:1741] Starting kubelet main sync loop.
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.348161    1870 kubelet.go:1752] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.348558    1870 volume_manager.go:249] Starting Kubelet Volume Manager
Dec 13 09:09:35 k8s02 kubelet[1870]: E1213 03:39:35.348060    1870 event.go:259] Could not construct reference to: '&v1.Node{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"k8s02", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"beta.kubernetes.io/os":"linux", "beta.kubernetes.io/arch":"amd64", "kubernetes.io/hostname":"k8s02"}, Annotations:map[string]string{"volumes.kubernetes.io/controller-managed-attach-detach":"true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.NodeSpec{PodCIDR:"", ExternalID:"k8s02", ProviderID:"", Unschedulable:false, Taints:[]v1.Taint(nil)}, Status:v1.NodeStatus{Capacity:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:2000, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:2097295360, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}, "pods":resource.Quantity{i:resource.int64Amount{value:110, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"DecimalSI"}}, Allocatable:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:2000, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:1992437760, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}, "pods":resource.Quantity{i:resource.int64Amount{value:110, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"DecimalSI"}}, Phase:"", Conditions:[]v1.NodeCondition{v1.NodeCondition{Type:"OutOfDisk", Status:"False", LastHeartbeatTime:v1.Time{Time:time.Time{sec:63648733175, nsec:326060133, loc:(*time.Location)(0x4e7b120)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63648733175, nse
Dec 13 09:09:35 k8s02 kubelet[1870]: c:326060133, loc:(*time.Location)(0x4e7b120)}}, Reason:"KubeletHasSufficientDisk", Message:"kubelet has sufficient disk space available"}, v1.NodeCondition{Type:"MemoryPressure", Status:"False", LastHeartbeatTime:v1.Time{Time:time.Time{sec:63648733175, nsec:326227909, loc:(*time.Location)(0x4e7b120)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63648733175, nsec:326227909, loc:(*time.Location)(0x4e7b120)}}, Reason:"KubeletHasSufficientMemory", Message:"kubelet has sufficient memory available"}, v1.NodeCondition{Type:"DiskPressure", Status:"False", LastHeartbeatTime:v1.Time{Time:time.Time{sec:63648733175, nsec:326237437, loc:(*time.Location)(0x4e7b120)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63648733175, nsec:326237437, loc:(*time.Location)(0x4e7b120)}}, Reason:"KubeletHasNoDiskPressure", Message:"kubelet has no disk pressure"}, v1.NodeCondition{Type:"Ready", Status:"False", LastHeartbeatTime:v1.Time{Time:time.Time{sec:63648733175, nsec:326329098, loc:(*time.Location)(0x4e7b120)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63648733175, nsec:326329098, loc:(*time.Location)(0x4e7b120)}}, Reason:"KubeletNotReady", Message:"container runtime is down,PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s,network state unknown"}}, Addresses:[]v1.NodeAddress{v1.NodeAddress{Type:"LegacyHostIP", Address:"192.168.0.148"}, v1.NodeAddress{Type:"InternalIP", Address:"192.168.0.148"}, v1.NodeAddress{Type:"Hostname", Address:"k8s02"}}, DaemonEndpoints:v1.NodeDaemonEndpoints{KubeletEndpoint:v1.DaemonEndpoint{Port:10250}}, NodeInfo:v1.NodeSystemInfo{MachineID:"89ca704fa3ad49d0a93236a30defaf56", SystemUUID:"43DB5E50-1429-46A7-8144-7FAA6F9DEE5E", BootID:"ed517130-f312-4e2f-a9ad-0ae735ba910b", KernelVersion:"4.4.0-103-generic", OSImage:"Ubuntu 16.04.3 LTS", ContainerRuntimeVersion:"docker://Unknown", KubeletVersion:"v1.6.7", KubeProxyVersion:"v1.6.7", OperatingSystem:"linux", Architecture:"amd64"}, Images:[]v1.ContainerImage(nil), VolumesInUse:[]v1.UniqueVolumeName(nil), Volumes
Dec 13 09:09:35 k8s02 kubelet[1870]: Attached:[]v1.AttachedVolume(nil)}}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'NodeAllocatableEnforced' 'Updated Node Allocatable limit across pods'
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.387153    1870 factory.go:309] Registering Docker factory
Dec 13 09:09:35 k8s02 kubelet[1870]: W1213 03:39:35.387362    1870 manager.go:247] Registration of the rkt container factory failed: unable to communicate with Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.387395    1870 factory.go:54] Registering systemd factory
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.387845    1870 factory.go:86] Registering Raw factory
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.390454    1870 manager.go:1106] Started watching for new ooms in manager
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.395973    1870 oomparser.go:185] oomparser using systemd
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.402535    1870 manager.go:288] Starting recovery of all containers
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.482283    1870 manager.go:293] Recovery completed
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.572919    1870 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Dec 13 09:09:35 k8s02 kubelet[1870]: I1213 03:39:35.576027    1870 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Dec 13 09:09:45 k8s02 kubelet[1870]: I1213 03:39:45.614509    1870 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Dec 13 09:09:45 k8s02 kubelet[1870]: I1213 03:39:45.616905    1870 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Dec 13 09:09:55 k8s02 kubelet[1870]: I1213 03:39:55.639268    1870 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Dec 13 09:09:55 k8s02 kubelet[1870]: I1213 03:39:55.641519    1870 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Dec 13 09:10:05 k8s02 kubelet[1870]: I1213 03:40:05.663484    1870 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Dec 13 09:10:05 k8s02 kubelet[1870]: I1213 03:40:05.666658    1870 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
Dec 13 09:10:15 k8s02 kubelet[1870]: I1213 03:40:15.696767    1870 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
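
The relevant warnings appear to be "No API client: no api servers specified" and "No api server defined - no node status update will be sent.", which suggest the kubelet was started without any API server configuration at all. The flags the running kubelet actually received can be inspected, and connectivity to the API server tested with the kubelet's client certificate (the certificate paths below are illustrative):

# show the unit file (and any drop-ins) kubelet runs from
systemctl cat kubelet.service
# show the flags of the running kubelet process
ps -o args= -C kubelet
# verify the API server is reachable with the kubelet's client certificate
curl --cacert /etc/kubernetes/ssl/ca.pem \
     --cert /etc/kubernetes/ssl/kubelet.pem \
     --key /etc/kubernetes/ssl/kubelet-key.pem \
     https://k8s-api.virtual.local/healthz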

How can I fix this error?

0 Answers:

No answers yet.