Kubernetes cluster makes endless DNS lookups for gcr.io and floods the router. What's wrong, and how can I stop it?

Posted: 2016-06-08 09:31:13

Tags: docker dns kubernetes raspberry-pi2

I'm running a Kubernetes 1.2.0 cluster on four Raspberry Pi 2s with Hypriot OS (2015-11-15 stable release). The setup was built for demo purposes. The Pis are networked through a switch; a consumer-grade router (IP 192.168.1.1) running DD-WRT is also connected to that switch and acts as a wireless bridge, DHCP server, and DNS server (including local DNS, so the Pis are reachable by hostname). The install scripts and setup YAMLs can be found on GitHub.

The problem is that the Pis generate a huge number of DNS queries on UDP port 53, which threaten to overwhelm the router: it shows over 2600 active IP connections, roughly 1600 from the master node and ~300 from each worker node. The cluster isn't running any deployments, pods, services, or anything else, and no internal DNS (SkyDNS) is installed. I can't see why all these lookups would be necessary, but they are fired off in rapid succession. With only 4 nodes the router still (barely) holds up, but for the demo I'm planning on Friday I'll have to connect at least 4 more, which would probably overwhelm the router and take the cluster down with it.
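The per-node connection counts above came from the router's connection table. For reference, here is a small sketch of how they can be tallied per source IP; the `/proc/net/ip_conntrack` path is an assumption about this DD-WRT build (newer kernels expose `/proc/net/nf_conntrack` instead), and the helper name `conn_per_src` is my own.

```shell
# Tally active connections per source IP from a netfilter conntrack dump.
# Each conntrack entry's first "src=" field is the original direction,
# so counting only the first one per line avoids double-counting replies.
conn_per_src() {
  awk '{
    for (i = 1; i <= NF; i++)
      if ($i ~ /^src=/) { sub(/^src=/, "", $i); count[$i]++; break }
  } END { for (ip in count) print count[ip], ip }' | sort -rn
}

# Usage on the router (path assumed for this firmware):
#   cat /proc/net/ip_conntrack | conn_per_src
```

The busiest talkers sort to the top, which makes it easy to confirm which node is responsible for most of the load.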

To troubleshoot, I tried to find out which domain my cluster is so eager to resolve:

HypriotOS: root@rpi-node-21 in ~
$ tcpdump -vvv -s 0 -l -n port 53
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
10:29:39.724300 IP (tos 0x0, ttl 64, id 4189, offset 0, flags [DF], proto UDP (17), length 52)
    192.168.1.94.58760 > 192.168.1.1.53: [bad udp cksum 0x83e1 -> 0x3e07!] 32499+ A? gcr.io. (24)
10:29:39.724434 IP (tos 0x0, ttl 64, id 4190, offset 0, flags [DF], proto UDP (17), length 52)
    192.168.1.94.58760 > 192.168.1.1.53: [bad udp cksum 0x83e1 -> 0x076d!] 46450+ AAAA? gcr.io. (24)
10:29:39.725011 IP (tos 0x0, ttl 64, id 23734, offset 0, flags [DF], proto UDP (17), length 68)
    192.168.1.1.53 > 192.168.1.94.58760: [udp sum ok] 32499 q: A? gcr.io. 1/0/0 gcr.io. [10s] A 173.194.65.82 (40)
10:29:39.725226 IP (tos 0x0, ttl 64, id 23735, offset 0, flags [DF], proto UDP (17), length 80)
    192.168.1.1.53 > 192.168.1.94.58760: [udp sum ok] 46450 q: AAAA? gcr.io. 1/0/0 gcr.io. [10s] AAAA 2a00:1450:4013:c00::52 (52)
10:29:39.730163 IP (tos 0x0, ttl 64, id 4191, offset 0, flags [DF], proto UDP (17), length 52)
    192.168.1.94.46180 > 192.168.1.1.53: [bad udp cksum 0x83e1 -> 0xef5b!] 65218+ A? gcr.io. (24)

As you can see, the cluster looks up gcr.io, gets a perfectly good answer (173.194.65.82), and then immediately looks it up again (note the timestamps).
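To quantify the flood rather than eyeball it, the same capture can be piped through a small counter. This is a sketch; the name-extraction pattern assumes tcpdump's text output shown above, where `A?`/`AAAA?` immediately precede the queried name, and the function name is my own.

```shell
# Count DNS queries per name from tcpdump's text output on stdin.
# Query names follow the "A?" / "AAAA?" tokens and carry a trailing dot.
dns_query_counts() {
  awk '{
    for (i = 1; i < NF; i++)
      if ($i == "A?" || $i == "AAAA?") {
        name = $(i + 1); sub(/\.$/, "", name); count[name]++
      }
  } END { for (n in count) print count[n], n }' | sort -rn
}

# Usage (summarize 30 seconds of outgoing queries):
#   timeout 30 tcpdump -l -n 'udp dst port 53' | dns_query_counts
```

Filtering on `udp dst port 53` keeps the responses (which also contain `A?` in the `q:` section) out of the count.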

Does anyone have an idea what might be going on and, more importantly, how to stop it, short of smashing the Raspberry Pis and starting a dog-walking service in New Zealand? I've included some logs below, and I can respond quickly to requests for more information. I really hope somebody can help; thanks in advance!

Julian

HypriotOS: root@rpi-master in ~
$ docker logs k8s-master
I0608 09:19:08.523757     769 server.go:137] Running kubelet in containerized mode (experimental)
W0608 09:19:39.449996     769 server.go:445] Could not load kubeconfig file /var/lib/kubelet/kubeconfig: stat /var/lib/kubelet/kubeconfig: no such file or directory. Trying auth path instead.
W0608 09:19:39.450301     769 server.go:406] Could not load kubernetes auth path /var/lib/kubelet/kubernetes_auth: stat /var/lib/kubelet/kubernetes_auth: no such file or directory. Continuing with defaults.
I0608 09:19:39.451561     769 plugins.go:71] No cloud provider specified.
I0608 09:19:39.451704     769 server.go:312] Successfully initialized cloud provider: "" from the config file: ""
I0608 09:19:39.452446     769 manager.go:132] cAdvisor running in container: "/docker/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62"
I0608 09:19:41.022249     769 fs.go:109] Filesystem partitions: map[/dev/root:{mountpoint:/rootfs major:179 minor:2 fsType: blockSize:0}]
E0608 09:19:41.038167     769 machine.go:176] failed to get cache information for node 0: open /sys/devices/system/cpu/cpu0/cache: no such file or directory
I0608 09:19:43.098937     769 manager.go:169] Machine: {NumCores:4 CpuFrequency:900000 MemoryCapacity:970452992 MachineID:822a063820bf4276a8c5b4da928a438c SystemUUID:07c0f9c7ac2242e2954579d53e00b836 BootID:3148f74f-555c-4df9-ab12-79e04a88e086 Filesystems:[{Device:/dev/root Capacity:14946500608 Type:vfs Inodes:3796576}] DiskMap:map[179:0:{Name:mmcblk0 Major:179 Minor:0 Size:16021192704 Scheduler:deadline}] NetworkDevices:[{Name:eth0 MacAddress:b8:27:eb:8b:3c:c6 Speed:100 Mtu:1500} {Name:flannel0 MacAddress: Speed:10 Mtu:1472}] Topology:[{Id:0 Memory:0 Cores:[{Id:0 Threads:[0] Caches:[]} {Id:1 Threads:[1] Caches:[]} {Id:2 Threads:[2] Caches:[]} {Id:3 Threads:[3] Caches:[]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
I0608 09:19:43.109629     769 manager.go:175] Version: {KernelVersion:4.1.12-hypriotos-v7+ ContainerOsVersion:Debian GNU/Linux 8 (jessie) DockerVersion:1.9.0 CadvisorVersion: CadvisorRevision:}
I0608 09:19:43.118227     769 server.go:319] Using root directory: /var/lib/kubelet
I0608 09:19:43.119828     769 server.go:673] Adding manifest file: /etc/kubernetes/manifests-multi
I0608 09:19:43.120179     769 file.go:47] Watching path "/etc/kubernetes/manifests-multi"
I0608 09:19:43.120347     769 server.go:683] Watching apiserver
W0608 09:19:43.164980     769 kubelet.go:508] Hairpin mode set to "promiscuous-bridge" but configureCBR0 is false, falling back to "hairpin-veth"
I0608 09:19:43.165217     769 kubelet.go:276] Hairpin mode set to "hairpin-veth"
I0608 09:19:44.445117     769 manager.go:244] Setting dockerRoot to /var/lib/docker
I0608 09:19:44.452306     769 plugins.go:56] Registering credential provider: .dockercfg
I0608 09:19:44.458106     769 plugins.go:291] Loaded volume plugin "kubernetes.io/aws-ebs"
I0608 09:19:44.458441     769 plugins.go:291] Loaded volume plugin "kubernetes.io/empty-dir"
I0608 09:19:44.458994     769 plugins.go:291] Loaded volume plugin "kubernetes.io/gce-pd"
I0608 09:19:44.459312     769 plugins.go:291] Loaded volume plugin "kubernetes.io/git-repo"
I0608 09:19:44.459766     769 plugins.go:291] Loaded volume plugin "kubernetes.io/host-path"
I0608 09:19:44.460058     769 plugins.go:291] Loaded volume plugin "kubernetes.io/nfs"
I0608 09:19:44.460314     769 plugins.go:291] Loaded volume plugin "kubernetes.io/secret"
I0608 09:19:44.460872     769 plugins.go:291] Loaded volume plugin "kubernetes.io/iscsi"
I0608 09:19:44.461310     769 plugins.go:291] Loaded volume plugin "kubernetes.io/glusterfs"
I0608 09:19:44.461611     769 plugins.go:291] Loaded volume plugin "kubernetes.io/persistent-claim"
I0608 09:19:44.462352     769 plugins.go:291] Loaded volume plugin "kubernetes.io/rbd"
I0608 09:19:44.462801     769 plugins.go:291] Loaded volume plugin "kubernetes.io/cinder"
I0608 09:19:44.463297     769 plugins.go:291] Loaded volume plugin "kubernetes.io/cephfs"
I0608 09:19:44.463928     769 plugins.go:291] Loaded volume plugin "kubernetes.io/downward-api"
I0608 09:19:44.464562     769 plugins.go:291] Loaded volume plugin "kubernetes.io/fc"
I0608 09:19:44.465098     769 plugins.go:291] Loaded volume plugin "kubernetes.io/flocker"
I0608 09:19:44.465609     769 plugins.go:291] Loaded volume plugin "kubernetes.io/azure-file"
I0608 09:19:44.466192     769 plugins.go:291] Loaded volume plugin "kubernetes.io/configmap"
I0608 09:19:44.481512     769 server.go:632] Started kubelet
E0608 09:19:44.483696     769 kubelet.go:956] Image garbage collection failed: unable to find data for container /
I0608 09:19:44.483849     769 server.go:109] Starting to listen on 0.0.0.0:10250
I0608 09:19:44.484162     769 server.go:126] Starting to listen read-only on 0.0.0.0:10255
E0608 09:19:44.513219     769 event.go:202] Unable to write event: 'Post http://rpi-master:8080/api/v1/namespaces/default/events: dial tcp 127.0.1.1:8080: connection refused' (may retry after sleeping)
I0608 09:19:44.563938     769 container_manager_linux.go:207] Updating kernel flag: vm/overcommit_memory, expected value: 1, actual value: 0
I0608 09:19:44.564896     769 container_manager_linux.go:207] Updating kernel flag: kernel/panic, expected value: 10, actual value: 0
I0608 09:19:44.565542     769 container_manager_linux.go:207] Updating kernel flag: kernel/panic_on_oops, expected value: 1, actual value: 0
I0608 09:19:44.568361     769 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I0608 09:19:44.568627     769 manager.go:123] Starting to sync pod status with apiserver
I0608 09:19:44.568820     769 kubelet.go:2356] Starting kubelet main sync loop.
I0608 09:19:44.568969     769 kubelet.go:2365] skipping pod synchronization - [container runtime is down]
I0608 09:19:45.499027     769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84
I0608 09:19:45.499529     769 kubelet.go:1134] Attempting to register node 192.168.1.84
I0608 09:19:45.506507     769 kubelet.go:1137] Unable to register 192.168.1.84 with the apiserver: Post http://rpi-master:8080/api/v1/nodes: dial tcp 127.0.1.1:8080: connection refused
I0608 09:19:46.039350     769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84
I0608 09:19:46.039646     769 kubelet.go:1134] Attempting to register node 192.168.1.84
I0608 09:19:46.043880     769 kubelet.go:1137] Unable to register 192.168.1.84 with the apiserver: Post http://rpi-master:8080/api/v1/nodes: dial tcp 127.0.1.1:8080: connection refused
E0608 09:19:46.498331     769 event.go:202] Unable to write event: 'Post http://rpi-master:8080/api/v1/namespaces/default/events: dial tcp 127.0.1.1:8080: connection refused' (may retry after sleeping)
I0608 09:19:46.966327     769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84
I0608 09:19:46.966641     769 kubelet.go:1134] Attempting to register node 192.168.1.84
I0608 09:19:46.970968     769 kubelet.go:1137] Unable to register 192.168.1.84 with the apiserver: Post http://rpi-master:8080/api/v1/nodes: dial tcp 127.0.1.1:8080: connection refused
I0608 09:19:47.512787     769 factory.go:230] Registering Docker factory
I0608 09:19:47.576324     769 factory.go:97] Registering Raw factory
I0608 09:19:48.044110     769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84
I0608 09:19:48.044409     769 kubelet.go:1134] Attempting to register node 192.168.1.84
I0608 09:19:48.049325     769 kubelet.go:1137] Unable to register 192.168.1.84 with the apiserver: Post http://rpi-master:8080/api/v1/nodes: dial tcp 127.0.1.1:8080: connection refused
I0608 09:19:49.132613     769 manager.go:1003] Started watching for new ooms in manager
I0608 09:19:49.154846     769 oomparser.go:182] oomparser using systemd
I0608 09:19:49.172850     769 manager.go:256] Starting recovery of all containers
I0608 09:19:49.529570     769 manager.go:261] Recovery completed
I0608 09:19:49.569951     769 kubelet.go:2365] skipping pod synchronization - [container runtime is down]
I0608 09:19:49.781660     769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84
I0608 09:19:49.782820     769 kubelet.go:1134] Attempting to register node 192.168.1.84
I0608 09:19:49.796120     769 kubelet.go:1137] Unable to register 192.168.1.84 with the apiserver: Post http://rpi-master:8080/api/v1/nodes: dial tcp 127.0.1.1:8080: connection refused
I0608 09:19:53.112626     769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84
I0608 09:19:53.112966     769 kubelet.go:1134] Attempting to register node 192.168.1.84
I0608 09:19:53.117777     769 kubelet.go:1137] Unable to register 192.168.1.84 with the apiserver: Post http://rpi-master:8080/api/v1/nodes: dial tcp 127.0.1.1:8080: connection refused
I0608 09:19:54.571235     769 kubelet.go:2388] SyncLoop (ADD, "file"): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)"
E0608 09:19:54.571618     769 kubelet.go:2307] error getting node: node '192.168.1.84' is not in cache
I0608 09:19:54.572268     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerDied", Data:"e736ec8218e250651b39758f3bbde22d4cdbb343e4118530d5791e4218786970"}
W0608 09:19:54.586217     769 manager.go:397] Failed to update status for pod "_()": Get http://rpi-master:8080/api/v1/namespaces/default/pods/k8s-master-192.168.1.84: dial tcp 127.0.1.1:8080: connection refused
I0608 09:19:54.597285     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerDied", Data:"277772303bb1fa1c72ebe496016d1a3e00e961d5935c126c5285c0af76fa8456"}
E0608 09:19:54.609676     769 kubelet.go:1781] Failed creating a mirror pod for "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)": Post http://rpi-master:8080/api/v1/namespaces/default/pods: dial tcp 127.0.1.1:8080: connection refused
I0608 09:19:54.678305     769 manager.go:1698] Need to restart pod infra container for "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)" because it is not found
I0608 09:19:54.770520     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerDied", Data:"a64055838d257678ba5178bc2589f66839971070c6735335682c80785e51c943"}
I0608 09:19:54.823445     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerDied", Data:"33ee7433077053694ff60552c600a535307ccfd0d752a2339c5c739591098d2b"}
I0608 09:19:54.879917     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerDied", Data:"1c51763f63dfa80f6bc634f662710b71bfa341c0c69009067e2c3ae4a8a1673e"}
I0608 09:19:54.926815     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerDied", Data:"087f7e397a98370f3a201e39b49e875c96b3c8290993ed1fc4a42dc848b0680b"}
I0608 09:19:55.008764     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerDied", Data:"5e6ab61a95df5120cec057e515ddb7679de385169b516b7f09d3ede4e9cd2f50"}
I0608 09:19:55.920613     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerStarted", Data:"390a981a905d603007fb3009953efa5bba54d26287eeff4c5cbc8983f039134f"}
E0608 09:19:56.521544     769 event.go:202] Unable to write event: 'Post http://rpi-master:8080/api/v1/namespaces/default/events: dial tcp 127.0.1.1:8080: connection refused' (may retry after sleeping)
I0608 09:19:57.315403     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerStarted", Data:"f818c0e9b622947a00cc8cc7ce719846c965bbe47a26c90bd7dcc6ec81c9ef0f"}
I0608 09:19:59.233783     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerStarted", Data:"defee550850fd55fc2ecb1a41fdd47129133d0b0b8f1576f8cff0c537022782a"}
I0608 09:19:59.830736     769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84
I0608 09:19:59.831073     769 kubelet.go:1134] Attempting to register node 192.168.1.84
I0608 09:19:59.837299     769 kubelet.go:1137] Unable to register 192.168.1.84 with the apiserver: Post http://rpi-master:8080/api/v1/nodes: dial tcp 127.0.1.1:8080: connection refused
I0608 09:20:00.511849     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerStarted", Data:"79ea416c11adae72af1e454b07c5f00efcc6677c45a76d510cc0717dc7015806"}
W0608 09:20:00.518862     769 manager.go:397] Failed to update status for pod "_()": Get http://rpi-master:8080/api/v1/namespaces/default/pods/k8s-master-192.168.1.84: dial tcp 127.0.1.1:8080: connection refused
E0608 09:20:00.525216     769 kubelet.go:1781] Failed creating a mirror pod for "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)": Post http://rpi-master:8080/api/v1/namespaces/default/pods: dial tcp 127.0.1.1:8080: connection refused
I0608 09:20:00.615637     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerStarted", Data:"162a0ec1abd0a329ff4f0582a72f2c47b9e99a1fbcc02409861b397f78480d16"}
E0608 09:20:01.612801     769 kubelet.go:1781] Failed creating a mirror pod for "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)": Post http://rpi-master:8080/api/v1/namespaces/default/pods: dial tcp 127.0.1.1:8080: connection refused
E0608 09:20:02.672719     769 kubelet.go:1781] Failed creating a mirror pod for "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)": Post http://rpi-master:8080/api/v1/namespaces/default/pods: dial tcp 127.0.1.1:8080: connection refused
W0608 09:20:04.572065     769 manager.go:397] Failed to update status for pod "_()": Get http://rpi-master:8080/api/v1/namespaces/default/pods/k8s-master-192.168.1.84: dial tcp 127.0.1.1:8080: connection refused
E0608 09:20:06.527979     769 event.go:202] Unable to write event: 'Post http://rpi-master:8080/api/v1/namespaces/default/events: dial tcp 127.0.1.1:8080: connection refused' (may retry after sleeping)
I0608 09:20:07.154072     769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84
I0608 09:20:07.154551     769 kubelet.go:1134] Attempting to register node 192.168.1.84
I0608 09:20:07.166567     769 kubelet.go:1137] Unable to register 192.168.1.84 with the apiserver: Post http://rpi-master:8080/api/v1/nodes: dial tcp 127.0.1.1:8080: connection refused
I0608 09:20:10.483245     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerDied", Data:"79ea416c11adae72af1e454b07c5f00efcc6677c45a76d510cc0717dc7015806"}
W0608 09:20:10.542522     769 manager.go:397] Failed to update status for pod "_()": Get http://rpi-master:8080/api/v1/namespaces/default/pods/k8s-master-192.168.1.84: dial tcp 127.0.1.1:8080: connection refused
E0608 09:20:10.548165     769 kubelet.go:1781] Failed creating a mirror pod for "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)": Post http://rpi-master:8080/api/v1/namespaces/default/pods: dial tcp 127.0.1.1:8080: connection refused
I0608 09:20:11.954701     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerStarted", Data:"79a4cbedadc1a825bce592b0c4cde042ffea5aa65f7c4227c8aec379aa64012c"}
W0608 09:20:12.042905     769 manager.go:397] Failed to update status for pod "_()": Get http://rpi-master:8080/api/v1/namespaces/default/pods/k8s-master-192.168.1.84: dial tcp 127.0.1.1:8080: connection refused
E0608 09:20:12.044221     769 kubelet.go:1781] Failed creating a mirror pod for "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)": Post http://rpi-master:8080/api/v1/namespaces/default/pods: dial tcp 127.0.1.1:8080: connection refused
I0608 09:20:14.288508     769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84
I0608 09:20:14.288868     769 kubelet.go:1134] Attempting to register node 192.168.1.84
I0608 09:20:14.300563     769 kubelet.go:1137] Unable to register 192.168.1.84 with the apiserver: Post http://rpi-master:8080/api/v1/nodes: dial tcp 127.0.1.1:8080: connection refused
W0608 09:20:14.574069     769 manager.go:397] Failed to update status for pod "_()": Get http://rpi-master:8080/api/v1/namespaces/default/pods/k8s-master-192.168.1.84: dial tcp 127.0.1.1:8080: connection refused
E0608 09:20:16.536424     769 event.go:202] Unable to write event: 'Post http://rpi-master:8080/api/v1/namespaces/default/events: dial tcp 127.0.1.1:8080: connection refused' (may retry after sleeping)
I0608 09:20:21.433294     769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84
I0608 09:20:21.433579     769 kubelet.go:1134] Attempting to register node 192.168.1.84
I0608 09:20:21.439670     769 kubelet.go:1137] Unable to register 192.168.1.84 with the apiserver: Post http://rpi-master:8080/api/v1/nodes: dial tcp 127.0.1.1:8080: connection refused
I0608 09:20:23.007412     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerDied", Data:"79a4cbedadc1a825bce592b0c4cde042ffea5aa65f7c4227c8aec379aa64012c"}
E0608 09:20:23.094738     769 kubelet.go:1781] Failed creating a mirror pod for "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)": Post http://rpi-master:8080/api/v1/namespaces/default/pods: dial tcp 127.0.1.1:8080: connection refused
W0608 09:20:23.094918     769 manager.go:397] Failed to update status for pod "_()": Get http://rpi-master:8080/api/v1/namespaces/default/pods/k8s-master-192.168.1.84: dial tcp 127.0.1.1:8080: connection refused
I0608 09:20:23.112488     769 manager.go:2047] Back-off 10s restarting failed container=controller-manager pod=k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)
E0608 09:20:23.113255     769 pod_workers.go:138] Error syncing pod 9391883ad78c50e752d5748347ef9aa2, skipping: failed to "StartContainer" for "controller-manager" with CrashLoopBackOff: "Back-off 10s restarting failed container=controller-manager pod=k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)"
I0608 09:20:24.463284     769 kubelet.go:2391] SyncLoop (UPDATE, "api"): "k8s-master-192.168.1.84_default(15a52b5d-2cb3-11e6-ae88-b827eb8b3cc6)"
I0608 09:20:24.497971     769 manager.go:2047] Back-off 10s restarting failed container=controller-manager pod=k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)
E0608 09:20:24.498876     769 pod_workers.go:138] Error syncing pod 9391883ad78c50e752d5748347ef9aa2, skipping: failed to "StartContainer" for "controller-manager" with CrashLoopBackOff: "Back-off 10s restarting failed container=controller-manager pod=k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)"
W0608 09:20:27.051713     769 request.go:627] Throttling request took 99.568025ms, request: POST:http://rpi-master:8080/api/v1/namespaces/default/events
W0608 09:20:27.251946     769 request.go:627] Throttling request took 169.564927ms, request: PATCH:http://rpi-master:8080/api/v1/namespaces/default/events/k8s-master-192.168.1.84.14561128dd6997d6
W0608 09:20:27.451762     769 request.go:627] Throttling request took 141.993996ms, request: POST:http://rpi-master:8080/api/v1/namespaces/default/events
W0608 09:20:27.651819     769 request.go:627] Throttling request took 175.348684ms, request: POST:http://rpi-master:8080/api/v1/namespaces/default/events
W0608 09:20:27.851906     769 request.go:627] Throttling request took 169.614146ms, request: PATCH:http://rpi-master:8080/api/v1/namespaces/default/events/k8s-master-192.168.1.84.14561128dd6997d6
W0608 09:20:28.051684     769 request.go:627] Throttling request took 155.040509ms, request: POST:http://rpi-master:8080/api/v1/namespaces/default/events
I0608 09:20:28.573729     769 kubelet.go:2750] Recording NodeHasSufficientDisk event message for node 192.168.1.84
I0608 09:20:28.574075     769 kubelet.go:1134] Attempting to register node 192.168.1.84
I0608 09:20:28.745103     769 kubelet.go:1150] Node 192.168.1.84 was previously registered
W0608 09:20:28.851791     769 request.go:627] Throttling request took 122.413785ms, request: PATCH:http://rpi-master:8080/api/v1/namespaces/default/events/k8s-master-192.168.1.84.14561128dd6997d6
W0608 09:20:29.051663     769 request.go:627] Throttling request took 157.66653ms, request: POST:http://rpi-master:8080/api/v1/namespaces/default/events
W0608 09:20:29.251789     769 request.go:627] Throttling request took 177.7883ms, request: POST:http://rpi-master:8080/api/v1/namespaces/default/events
W0608 09:20:29.451806     769 request.go:627] Throttling request took 174.880614ms, request: PATCH:http://rpi-master:8080/api/v1/namespaces/default/events/192.168.1.84.145611266d7f848a
W0608 09:20:29.651741     769 request.go:627] Throttling request took 147.397079ms, request: PATCH:http://rpi-master:8080/api/v1/namespaces/default/events/192.168.1.84.145611266d7f848a
W0608 09:20:29.851871     769 request.go:627] Throttling request took 164.236896ms, request: POST:http://rpi-master:8080/api/v1/namespaces/default/events
W0608 09:20:30.051664     769 request.go:627] Throttling request took 177.139919ms, request: POST:http://rpi-master:8080/api/v1/namespaces/default/events
W0608 09:20:30.251706     769 request.go:627] Throttling request took 176.659299ms, request: PATCH:http://rpi-master:8080/api/v1/namespaces/default/events/k8s-master-192.168.1.84.1456112f2f6934f8
W0608 09:20:30.451679     769 request.go:627] Throttling request took 159.788336ms, request: PATCH:http://rpi-master:8080/api/v1/namespaces/default/events/k8s-master-192.168.1.84.1456112f2f7d5ddc
W0608 09:20:30.651761     769 request.go:627] Throttling request took 154.810042ms, request: PATCH:http://rpi-master:8080/api/v1/namespaces/default/events/192.168.1.84.145611266d7f848a
W0608 09:20:30.851640     769 request.go:627] Throttling request took 155.878888ms, request: POST:http://rpi-master:8080/api/v1/namespaces/default/events
I0608 09:20:37.134464     769 kubelet.go:2451] SyncLoop (PLEG): "k8s-master-192.168.1.84_default(9391883ad78c50e752d5748347ef9aa2)", event: &pleg.PodLifecycleEvent{ID:"9391883ad78c50e752d5748347ef9aa2", Type:"ContainerStarted", Data:"246f201be0479d48a0a44c4d4f8a95126d73ac04146e3029739cfd1da7d1ee77"}
E0608 09:20:55.460305     769 fsHandler.go:106] failed to collect filesystem stats - du command failed on /rootfs/var/lib/docker/overlay/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62 with output stdout: 238752    /rootfs/var/lib/docker/overlay/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62
, stderr: du: cannot access '/rootfs/var/lib/docker/overlay/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62/merged/proc/679/task/702/fdinfo/19': No such file or directory
du: cannot access '/rootfs/var/lib/docker/overlay/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62/merged/proc/679/task/737/fdinfo/19': No such file or directory
du: cannot access '/rootfs/var/lib/docker/overlay/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62/merged/proc/679/task/738/fd/19': No such file or directory
du: cannot access '/rootfs/var/lib/docker/overlay/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62/merged/proc/1116/task/1116/fd/3': No such file or directory
du: cannot access '/rootfs/var/lib/docker/overlay/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62/merged/proc/1116/task/1116/fdinfo/3': No such file or directory
du: cannot access '/rootfs/var/lib/docker/overlay/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62/merged/proc/1116/fd/4': No such file or directory
du: cannot access '/rootfs/var/lib/docker/overlay/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62/merged/proc/1116/fdinfo/4': No such file or directory
 - exit status 1
I0608 09:20:55.460602     769 fsHandler.go:116] `du` on following dirs took 2.515023345s: [/rootfs/var/lib/docker/overlay/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62 /rootfs/var/lib/docker/containers/23c19dff6bca673029f1480f181089156640df376c46d371e4e7c438a9701d62]

1 Answer:

Answer 0: (score: 0)

I managed to "solve" the problem, wrongly, by adding `173.194.65.82 gcr.io` to /etc/hosts. This at least stops the outgoing DNS lookups from flooding the router, since the domain is now resolved locally. I guess it will do for my demo tomorrow, because at least I'll have a functioning cluster that isn't visibly DDoSing my router.
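For reference, the workaround as an idempotent snippet to run on each node (the helper name is my own; note that it pins gcr.io to a single Google frontend IP, which Google may rotate away at any time):

```shell
# Append a hosts entry only if it is not already present, so the script
# can be re-run on every node without duplicating lines.
pin_host() {
  hosts_file="$1"; entry="$2"
  grep -qxF "$entry" "$hosts_file" 2>/dev/null || printf '%s\n' "$entry" >> "$hosts_file"
}

# Usage on each node (as root):
#   pin_host /etc/hosts '173.194.65.82 gcr.io'
```

`grep -qxF` matches the exact whole line, so re-running the command is a no-op.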

It's extremely ugly, though, and I was close to drop-kicking one of the Raspberry Pis with tears of sadness in my eyes. If anyone has suggestions, I'm still interested in fixing the underlying problem!