KubeDNS error: server misbehaving

Date: 2018-05-22 12:59:03

Tags: docker dns kubernetes flannel kube-dns

I'm running into a problem when trying to exec into a container:

kubectl exec -it busybox-68654f944b-hj672 -- nslookup kubernetes
Error from server: error dialing backend: dial tcp: lookup worker2 on 127.0.0.53:53: server misbehaving
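
For context, 127.0.0.53 is the stub listener of systemd-resolved on the host doing the lookup (here, whichever machine has to resolve worker2 for the apiserver). A minimal sanity check of name resolution there (assuming systemd-resolved; the resolvectl spelling depends on the systemd version):

getent hosts worker2              # full NSS lookup, including /etc/hosts
nslookup worker2 127.0.0.53       # query the systemd-resolved stub directly
systemd-resolve --status          # or: resolvectl status on newer systemd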

Getting logs from the container:

kubectl -n kube-system logs kube-dns-598d7bf7d4-p99qr kubedns
Error from server: Get https://worker3:10250/containerLogs/kube-system/kube-dns-598d7bf7d4-p99qr/kubedns: dial tcp: lookup worker3 on 127.0.0.53:53: server misbehaving

I'm pretty much out of ideas... I mostly followed kubernetes-the-hard-way, but installed it on DigitalOcean and use Flannel for the pod network (I also use digitalocean-cloud-manager, which seems to run fine).

Also, kube-proxy seems to work: everything in its logs looks good, and the iptables configuration looks fine (to me, a noob).

Network (a check of the registered node addresses is sketched right after this list):

  • 10.244.0.0/16 Flannel / Pod network
  • 10.32.0.0/24 kube-proxy / service cluster
  • kube3 206.x.x.211 / 10.133.55.62
  • kube1 206.x.x.80 / 10.133.52.77
  • kube2 206.x.x.213 / 10.133.55.73
  • worker1 167.x.x.148 / 10.133.56.88
  • worker3 206.x.x.121 / 10.133.55.220
  • worker2 206.x.x.113 / 10.133.56.89
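
To dump the names and addresses actually registered for each node (plain `kubectl get nodes -o wide` works too; the jsonpath is just one way to list the address objects):

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses}{"\n"}{end}'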

So, my logs:

KUBE-DNS:

E0522 12:22:32 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:150: Failed to list *v1.Service: Get https://10.32.0.1:443/api/v1/services?resourceVersion=0: dial tcp 10.32.0.1:443: getsockopt: no route to host
E0522 12:22:32 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:147: Failed to list *v1.Endpoints: Get https://10.32.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.32.0.1:443: getsockopt: no route to host
I0522 12:22:32 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0522 12:22:33 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0522 12:22:33 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
F0522 12:22:34 dns.go:167] Timeout waiting for initialization
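
That "no route to host" means the kube-dns pod cannot reach the apiserver's service VIP at 10.32.0.1. A sketch for reproducing it from pod context (nicolaka/netshoot is only an example image that ships curl; any curl-capable image works):

kubectl run nettest --image=nicolaka/netshoot -it --rm -- bash
# then, inside the pod:
curl -k https://10.32.0.1:443/version        # service VIP, via the kube-proxy DNAT rules
curl -k https://10.133.52.77:6443/version    # one apiserver endpoint directly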

kube-proxy:

I0522 12:36:37 flags.go:27] FLAG: --alsologtostderr="false"
I0522 12:36:37 flags.go:27] FLAG: --bind-address="0.0.0.0"
I0522 12:36:37 flags.go:27] FLAG: --cleanup="false"
I0522 12:36:37 flags.go:27] FLAG: --cleanup-iptables="false"
I0522 12:36:37 flags.go:27] FLAG: --cleanup-ipvs="true"
I0522 12:36:37 flags.go:27] FLAG: --cluster-cidr=""
I0522 12:36:37 flags.go:27] FLAG: --config="/var/lib/kube-proxy/kube-proxy-config.yaml"
I0522 12:36:37 flags.go:27] FLAG: --config-sync-period="15m0s"
I0522 12:36:37 flags.go:27] FLAG: --conntrack-max="0"
I0522 12:36:37 flags.go:27] FLAG: --conntrack-max-per-core="32768"
I0522 12:36:37 flags.go:27] FLAG: --conntrack-min="131072"
I0522 12:36:37 flags.go:27] FLAG: --conntrack-tcp-timeout-close-wait="1h0m0s"
I0522 12:36:37 flags.go:27] FLAG: --conntrack-tcp-timeout-established="24h0m0s"
I0522 12:36:37 flags.go:27] FLAG: --feature-gates=""
I0522 12:36:37 flags.go:27] FLAG: --healthz-bind-address="0.0.0.0:10256"
I0522 12:36:37 flags.go:27] FLAG: --healthz-port="10256"
I0522 12:36:37 flags.go:27] FLAG: --help="false"
I0522 12:36:37 flags.go:27] FLAG: --hostname-override=""
I0522 12:36:37 flags.go:27] FLAG: --iptables-masquerade-bit="14"
I0522 12:36:37 flags.go:27] FLAG: --iptables-min-sync-period="0s"
I0522 12:36:37 flags.go:27] FLAG: --iptables-sync-period="30s"
I0522 12:36:37 flags.go:27] FLAG: --ipvs-min-sync-period="0s"
I0522 12:36:37 flags.go:27] FLAG: --ipvs-scheduler=""
I0522 12:36:37 flags.go:27] FLAG: --ipvs-sync-period="30s"
I0522 12:36:37 flags.go:27] FLAG: --kube-api-burst="10"
I0522 12:36:37 flags.go:27] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
I0522 12:36:37 flags.go:27] FLAG: --kube-api-qps="5"
I0522 12:36:37 flags.go:27] FLAG: --kubeconfig=""
I0522 12:36:37 flags.go:27] FLAG: --log-backtrace-at=":0"
I0522 12:36:37 flags.go:27] FLAG: --log-dir=""
I0522 12:36:37 flags.go:27] FLAG: --log-flush-frequency="5s"
I0522 12:36:37 flags.go:27] FLAG: --logtostderr="true"
I0522 12:36:37 flags.go:27] FLAG: --masquerade-all="false"
I0522 12:36:37 flags.go:27] FLAG: --master=""
I0522 12:36:37 flags.go:27] FLAG: --metrics-bind-address="127.0.0.1:10249"
I0522 12:36:37 flags.go:27] FLAG: --nodeport-addresses="[]"
I0522 12:36:37 flags.go:27] FLAG: --oom-score-adj="-999"
I0522 12:36:37 flags.go:27] FLAG: --profiling="false"
I0522 12:36:37 flags.go:27] FLAG: --proxy-mode=""
I0522 12:36:37 flags.go:27] FLAG: --proxy-port-range=""
I0522 12:36:37 flags.go:27] FLAG: --resource-container="/kube-proxy"
I0522 12:36:37 flags.go:27] FLAG: --stderrthreshold="2"
I0522 12:36:37 flags.go:27] FLAG: --udp-timeout="250ms"
I0522 12:36:37 flags.go:27] FLAG: --v="4"
I0522 12:36:37 flags.go:27] FLAG: --version="false"
I0522 12:36:37 flags.go:27] FLAG: --vmodule=""
I0522 12:36:37 flags.go:27] FLAG: --write-config-to=""
I0522 12:36:37 feature_gate.go:226] feature gates: &{{} map[]}
I0522 12:36:37 iptables.go:589] couldn't get iptables-restore version; assuming it doesn't support --wait
I0522 12:36:37 server_others.go:140] Using iptables Proxier.
I0522 12:36:37 proxier.go:346] minSyncPeriod: 0s, syncPeriod: 30s, burstSyncs: 2
I0522 12:36:37 server_others.go:174] Tearing down inactive rules.
I0522 12:36:37 server.go:444] Version: v1.10.2
I0522 12:36:37 oom_linux.go:65] attempting to set "/proc/self/oom_score_adj" to "-999"
I0522 12:36:37 server.go:470] Running in resource-only container "/kube-proxy"
I0522 12:36:37 healthcheck.go:309] Starting goroutine for healthz on 0.0.0.0:10256
I0522 12:36:37 server.go:591] getConntrackMax: using conntrack-min
I0522 12:36:37 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0522 12:36:37 conntrack.go:52] Setting nf_conntrack_max to 131072
I0522 12:36:37 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0522 12:36:37 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0522 12:36:37 bounded_frequency_runner.go:170] sync-runner Loop running
I0522 12:36:37 config.go:102] Starting endpoints config controller
I0522 12:36:37 config.go:202] Starting service config controller
I0522 12:36:37 controller_utils.go:1019] Waiting for caches to sync for service config controller
I0522 12:36:37 reflector.go:202] Starting reflector *core.Endpoints (15m0s) from k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86
I0522 12:36:37 reflector.go:240] Listing and watching *core.Endpoints from k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86
I0522 12:36:37 reflector.go:202] Starting reflector *core.Service (15m0s) from k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86
I0522 12:36:37 reflector.go:240] Listing and watching *core.Service from k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86
I0522 12:36:37 config.go:124] Calling handler.OnEndpointsAdd
I0522 12:36:37 endpoints.go:234] Setting endpoints for "kube-system/kubernetes-dashboard:" to [10.244.0.2:8443]
I0522 12:36:37 config.go:124] Calling handler.OnEndpointsAdd
I0522 12:36:37 endpoints.go:234] Setting endpoints for "default/hostnames:" to [10.244.0.3:9376 10.244.0.4:9376 10.244.0.4:9376]
I0522 12:36:37 config.go:124] Calling handler.OnEndpointsAdd
I0522 12:36:37 endpoints.go:234] Setting endpoints for "default/kubernetes:https" to [10.133.52.77:6443 10.133.55.62:6443 10.133.55.73:6443]
I0522 12:36:37 config.go:124] Calling handler.OnEndpointsAdd
I0522 12:36:37 config.go:124] Calling handler.OnEndpointsAdd
I0522 12:36:37 endpoints.go:234] Setting endpoints for "kube-system/kube-dns:dns" to []
I0522 12:36:37 endpoints.go:234] Setting endpoints for "kube-system/kube-dns:dns-tcp" to []
I0522 12:36:37 config.go:124] Calling handler.OnEndpointsAdd
I0522 12:36:37 config.go:224] Calling handler.OnServiceAdd
I0522 12:36:37 config.go:224] Calling handler.OnServiceAdd
I0522 12:36:37 config.go:224] Calling handler.OnServiceAdd
I0522 12:36:37 config.go:224] Calling handler.OnServiceAdd
I0522 12:36:37 controller_utils.go:1019] Waiting for caches to sync for endpoints config controller
I0522 12:36:37 shared_informer.go:123] caches populated
I0522 12:36:37 controller_utils.go:1026] Caches are synced for service config controller
I0522 12:36:37 config.go:210] Calling handler.OnServiceSynced()
I0522 12:36:37 proxier.go:623] Not syncing iptables until Services and Endpoints have been received from master
I0522 12:36:37 proxier.go:619] syncProxyRules took 38.306µs
I0522 12:36:37 shared_informer.go:123] caches populated
I0522 12:36:37 controller_utils.go:1026] Caches are synced for endpoints config controller
I0522 12:36:37 config.go:110] Calling handler.OnEndpointsSynced()
I0522 12:36:37 service.go:310] Adding new service port "default/kubernetes:https" at 10.32.0.1:443/TCP
I0522 12:36:37 service.go:310] Adding new service port "kube-system/kube-dns:dns" at 10.32.0.10:53/UDP
I0522 12:36:37 service.go:310] Adding new service port "kube-system/kube-dns:dns-tcp" at 10.32.0.10:53/TCP
I0522 12:36:37 service.go:310] Adding new service port "kube-system/kubernetes-dashboard:" at 10.32.0.175:443/TCP
I0522 12:36:37 service.go:310] Adding new service port "default/hostnames:" at 10.32.0.16:80/TCP
I0522 12:36:37 proxier.go:642] Syncing iptables rules
I0522 12:36:37 iptables.go:321] running iptables-save [-t filter]
I0522 12:36:37 iptables.go:321] running iptables-save [-t nat]
I0522 12:36:37 iptables.go:381] running iptables-restore [--noflush --counters]
I0522 12:36:37 healthcheck.go:235] Not saving endpoints for unknown healthcheck "default/hostnames"
I0522 12:36:37 proxier.go:619] syncProxyRules took 62.713913ms
I0522 12:36:38 config.go:141] Calling handler.OnEndpointsUpdate
I0522 12:36:38 config.go:141] Calling handler.OnEndpointsUpdate
I0522 12:36:40 config.go:141] Calling handler.OnEndpointsUpdate
I0522 12:36:40 config.go:141] Calling handler.OnEndpointsUpdate
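
Since kube-proxy programs the kube-dns service at 10.32.0.10:53, the VIP can be probed straight from a worker node; it will only answer once a kube-dns pod is actually ready behind it:

nslookup kubernetes.default.svc.cluster.local 10.32.0.10
dig @10.32.0.10 kubernetes.default.svc.cluster.local +short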

iptables -L -t nat

Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         
KUBE-SERVICES  all  --  anywhere             anywhere             /* kubernetes service portals */
DOCKER     all  --  anywhere             anywhere             ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
KUBE-SERVICES  all  --  anywhere             anywhere             /* kubernetes service portals */
DOCKER     all  --  anywhere            !localhost/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
KUBE-POSTROUTING  all  --  anywhere             anywhere             /* kubernetes postrouting rules */
MASQUERADE  all  --  172.17.0.0/16        anywhere            
RETURN     all  --  10.244.0.0/16        10.244.0.0/16       
MASQUERADE  all  --  10.244.0.0/16       !base-address.mcast.net/4 
RETURN     all  -- !10.244.0.0/16        worker3/24          
MASQUERADE  all  -- !10.244.0.0/16        10.244.0.0/16       
CNI-9f557b5f70a3ef9b57012dc9  all  --  10.244.0.0/16        anywhere             /* name: "bridge" id: "0d9b7e94498291d71ff1952655da822ab1a1f7c4e080d119ff0ca84a506f05f5" */
CNI-3f77e9111033967f6fe3038c  all  --  10.244.0.0/16        anywhere             /* name: "bridge" id: "3b535dda0868b2d75046fc76de3279de2874652b6731a87815908ecf40dd1924" */

Chain CNI-3f77e9111033967f6fe3038c (1 references)
target     prot opt source               destination         
ACCEPT     all  --  anywhere             10.244.0.0/16        /* name: "bridge" id: "3b535dda0868b2d75046fc76de3279de2874652b6731a87815908ecf40dd1924" */
MASQUERADE  all  --  anywhere            !base-address.mcast.net/4  /* name: "bridge" id: "3b535dda0868b2d75046fc76de3279de2874652b6731a87815908ecf40dd1924" */

Chain CNI-9f557b5f70a3ef9b57012dc9 (1 references)
target     prot opt source               destination         
ACCEPT     all  --  anywhere             10.244.0.0/16        /* name: "bridge" id: "0d9b7e94498291d71ff1952655da822ab1a1f7c4e080d119ff0ca84a506f05f5" */
MASQUERADE  all  --  anywhere            !base-address.mcast.net/4  /* name: "bridge" id: "0d9b7e94498291d71ff1952655da822ab1a1f7c4e080d119ff0ca84a506f05f5" */

Chain DOCKER (2 references)
target     prot opt source               destination         
RETURN     all  --  anywhere             anywhere            

Chain KUBE-MARK-DROP (0 references)
target     prot opt source               destination         
MARK       all  --  anywhere             anywhere             MARK or 0x8000

Chain KUBE-MARK-MASQ (10 references)
target     prot opt source               destination         
MARK       all  --  anywhere             anywhere             MARK or 0x4000

Chain KUBE-NODEPORTS (1 references)
target     prot opt source               destination         

Chain KUBE-POSTROUTING (1 references)
target     prot opt source               destination         
MASQUERADE  all  --  anywhere             anywhere             /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000

Chain KUBE-SEP-372W2QPHULAJK7KN (2 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.133.52.77         anywhere             /* default/kubernetes:https */
DNAT       tcp  --  anywhere             anywhere             /* default/kubernetes:https */ recent: SET name: KUBE-SEP-372W2QPHULAJK7KN side: source mask: 255.255.255.255 tcp to:10.133.52.77:6443

Chain KUBE-SEP-F5C5FPCVD73UOO2K (2 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.133.55.73         anywhere             /* default/kubernetes:https */
DNAT       tcp  --  anywhere             anywhere             /* default/kubernetes:https */ recent: SET name: KUBE-SEP-F5C5FPCVD73UOO2K side: source mask: 255.255.255.255 tcp to:10.133.55.73:6443

Chain KUBE-SEP-LFOBDGSNKNVH4XYX (2 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.133.55.62         anywhere             /* default/kubernetes:https */
DNAT       tcp  --  anywhere             anywhere             /* default/kubernetes:https */ recent: SET name: KUBE-SEP-LFOBDGSNKNVH4XYX side: source mask: 255.255.255.255 tcp to:10.133.55.62:6443

Chain KUBE-SEP-NBPTKIZVPOJSUO47 (2 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.244.0.4           anywhere             /* default/hostnames: */
DNAT       tcp  --  anywhere             anywhere             /* default/hostnames: */ tcp to:10.244.0.4:9376
KUBE-MARK-MASQ  all  --  10.244.0.4           anywhere             /* default/hostnames: */
DNAT       tcp  --  anywhere             anywhere             /* default/hostnames: */ tcp to:10.244.0.4:9376

Chain KUBE-SEP-OT5RYZRAA2AMYTNV (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.244.0.2           anywhere             /* kube-system/kubernetes-dashboard: */
DNAT       tcp  --  anywhere             anywhere             /* kube-system/kubernetes-dashboard: */ tcp to:10.244.0.2:8443

Chain KUBE-SEP-XDZOTYYMKVEAAZHH (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.244.0.3           anywhere             /* default/hostnames: */
DNAT       tcp  --  anywhere             anywhere             /* default/hostnames: */ tcp to:10.244.0.3:9376

Chain KUBE-SERVICES (2 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  tcp  -- !10.244.0.0/16        10.32.0.1            /* default/kubernetes:https cluster IP */ tcp dpt:https
KUBE-SVC-NPX46M4PTMTKRN6Y  tcp  --  anywhere             10.32.0.1            /* default/kubernetes:https cluster IP */ tcp dpt:https
KUBE-MARK-MASQ  tcp  -- !10.244.0.0/16        10.32.0.175          /* kube-system/kubernetes-dashboard: cluster IP */ tcp dpt:https
KUBE-SVC-XGLOHA7QRQ3V22RZ  tcp  --  anywhere             10.32.0.175          /* kube-system/kubernetes-dashboard: cluster IP */ tcp dpt:https
KUBE-MARK-MASQ  tcp  -- !10.244.0.0/16        10.32.0.16           /* default/hostnames: cluster IP */ tcp dpt:http
KUBE-SVC-NWV5X2332I4OT4T3  tcp  --  anywhere             10.32.0.16           /* default/hostnames: cluster IP */ tcp dpt:http
KUBE-NODEPORTS  all  --  anywhere             anywhere             /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL

Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
target     prot opt source               destination         
KUBE-SEP-372W2QPHULAJK7KN  all  --  anywhere             anywhere             /* default/kubernetes:https */ recent: CHECK seconds: 10800 reap name: KUBE-SEP-372W2QPHULAJK7KN side: source mask: 255.255.255.255
KUBE-SEP-LFOBDGSNKNVH4XYX  all  --  anywhere             anywhere             /* default/kubernetes:https */ recent: CHECK seconds: 10800 reap name: KUBE-SEP-LFOBDGSNKNVH4XYX side: source mask: 255.255.255.255
KUBE-SEP-F5C5FPCVD73UOO2K  all  --  anywhere             anywhere             /* default/kubernetes:https */ recent: CHECK seconds: 10800 reap name: KUBE-SEP-F5C5FPCVD73UOO2K side: source mask: 255.255.255.255
KUBE-SEP-372W2QPHULAJK7KN  all  --  anywhere             anywhere             /* default/kubernetes:https */ statistic mode random probability 0.33332999982
KUBE-SEP-LFOBDGSNKNVH4XYX  all  --  anywhere             anywhere             /* default/kubernetes:https */ statistic mode random probability 0.50000000000
KUBE-SEP-F5C5FPCVD73UOO2K  all  --  anywhere             anywhere             /* default/kubernetes:https */

Chain KUBE-SVC-NWV5X2332I4OT4T3 (1 references)
target     prot opt source               destination         
KUBE-SEP-XDZOTYYMKVEAAZHH  all  --  anywhere             anywhere             /* default/hostnames: */ statistic mode random probability 0.33332999982
KUBE-SEP-NBPTKIZVPOJSUO47  all  --  anywhere             anywhere             /* default/hostnames: */ statistic mode random probability 0.50000000000
KUBE-SEP-NBPTKIZVPOJSUO47  all  --  anywhere             anywhere             /* default/hostnames: */

Chain KUBE-SVC-XGLOHA7QRQ3V22RZ (1 references)
target     prot opt source               destination         
KUBE-SEP-OT5RYZRAA2AMYTNV  all  --  anywhere             anywhere             /* kube-system/kubernetes-dashboard: */
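
One detail worth noting: the kube-proxy log above set the kube-dns endpoints to [], and consistently there are no DNAT rules for 10.32.0.10 anywhere in this dump. A quick re-check once the pods become ready:

iptables-save -t nat | grep -E 'kube-dns|10.32.0.10'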

kubelet:

W12:43:36 prober.go:103] No ref for container "containerd://6405ae121704b15554e019beb622fbcf991e0d3c75b20eab606e147dc1e6966f" (kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns)
I12:43:36 prober.go:111] Readiness probe for "kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns" failed (failure): Get http://10.244.0.2:8081/readiness: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W12:43:46 prober.go:103] No ref for container "containerd://6405ae121704b15554e019beb622fbcf991e0d3c75b20eab606e147dc1e6966f" (kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns)
I12:43:46 prober.go:111] Readiness probe for "kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns" failed (failure): Get http://10.244.0.2:8081/readiness: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W12:43:56 prober.go:103] No ref for container "containerd://6405ae121704b15554e019beb622fbcf991e0d3c75b20eab606e147dc1e6966f" (kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns)
I12:43:56 prober.go:111] Readiness probe for "kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns" failed (failure): Get http://10.244.0.2:8081/readiness: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W12:44:06 prober.go:103] No ref for container "containerd://6405ae121704b15554e019beb622fbcf991e0d3c75b20eab606e147dc1e6966f" (kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns)
I12:44:06 prober.go:111] Readiness probe for "kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns" failed (failure): Get http://10.244.0.2:8081/readiness: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
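
The readiness probe is issued by the kubelet on the node hosting the pod, so the node itself cannot reach 10.244.0.2:8081. A minimal on-node check (run on the worker carrying the kube-dns pod):

ping -c1 10.244.0.2                          # pod IP on the local CNI bridge
curl -m 2 http://10.244.0.2:8081/readiness   # the probe the kubelet performs
ip -d link show flannel.1                    # VXLAN interface state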

Configs:

Workers:

kubelet:

systemd service:

/usr/local/bin/kubelet \
  --config=/var/lib/kubelet/kubelet-config.yaml \
  --container-runtime=remote \
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \
  --image-pull-progress-deadline=2m \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --network-plugin=cni \
  --register-node=true \
  --v=2 \
  --cloud-provider=external \
  --allow-privileged=true

kubelet-config.yaml:

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
podCIDR: "10.244.0.0/16"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/worker3.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/worker3-key.pem"

kube-proxy:

systemd service:

ExecStart=/usr/local/bin/kube-proxy \
  --config=/var/lib/kube-proxy/kube-proxy-config.yaml -v 4

kube-proxy-config.yaml:

kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.244.0.0/16"

kubeconfig:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ASLDJL...ALKJDS=
    server: https://206.x.x.7:6443
  name: kubernetes-the-hard-way
contexts:
- context:
    cluster: kubernetes-the-hard-way
    user: system:kube-proxy
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: system:kube-proxy
  user:
    client-certificate-data: ASDLJAL ... ALDJS
    client-key-data: LS0tLS1CRUdJ...ASDJ

Controllers:

kube-apiserver:

ExecStart=/usr/local/bin/kube-apiserver \
  --advertise-address=10.133.55.62 \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/audit.log \
  --authorization-mode=Node,RBAC \
  --bind-address=0.0.0.0 \
  --client-ca-file=/var/lib/kubernetes/ca.pem \
  --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --enable-swagger-ui=true \
  --etcd-cafile=/var/lib/kubernetes/ca.pem \
  --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \
  --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \
  --etcd-servers=https://10.133.55.73:2379,https://10.133.52.77:2379,https://10.133.55.62:2379 \
  --event-ttl=1h \
  --experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \
  --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \
  --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \
  --kubelet-https=true \
  --runtime-config=api/all \
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \
  --service-cluster-ip-range=10.32.0.0/24 \
  --service-node-port-range=30000-32767 \
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \
  --v=2
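
A hedged note on the "lookup worker3 ... server misbehaving" errors: the apiserver dials kubelets using the first entry of --kubelet-preferred-address-types, whose default order starts with Hostname. If the worker hostnames don't resolve on the controllers, preferring InternalIP is a common workaround (shown as a sketch; it is not set in the unit above):

  --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP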

kube-controller-manager:

ExecStart=/usr/local/bin/kube-controller-manager \
  --address=0.0.0.0 \
  --cluster-cidr=10.244.0.0/16 \
  --allocate-node-cidrs=true \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \
  --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \
  --leader-elect=true \
  --root-ca-file=/var/lib/kubernetes/ca.pem \
  --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \
  --service-cluster-ip-range=10.32.0.0/24 \
  --use-service-account-credentials=true \
  --v=2
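
With --allocate-node-cidrs=true each node gets a slice of 10.244.0.0/16, so every worker's flannel subnet should line up with that node's spec.podCIDR (note the kubelet-config above hard-codes podCIDR to the whole /16, which is worth cross-checking). A minimal comparison, assuming flanneld writes its env file to the default path:

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
cat /run/flannel/subnet.env    # on each worker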

Flannel config/logs:

https://pastebin.com/hah0uSFX (because the post got too long!)

Edit:

route

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         _gateway        0.0.0.0         UG    0      0        0 eth0
10.18.0.0       0.0.0.0         255.255.0.0     U     0      0        0 eth0
10.133.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth1
10.244.0.0      10.244.0.0      255.255.255.0   UG    0      0        0 flannel.1
10.244.0.0      0.0.0.0         255.255.0.0     U     0      0        0 cnio0
10.244.1.0      10.244.1.0      255.255.255.0   UG    0      0        0 flannel.1
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
206.189.96.0    0.0.0.0         255.255.240.0   U     0      0        0 eth0
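
Note the overlap above: 10.244.0.0/24 is routed via flannel.1 while the whole 10.244.0.0/16 sits on cnio0. Whether the bridge should really own the full /16 (rather than only this node's /24) can be checked against the CNI bridge config (path as used in kubernetes-the-hard-way; adjust if yours differs):

cat /etc/cni/net.d/10-bridge.conf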

ip route get 10.32.0.1
10.32.0.1 via 206.189.96.1 dev eth0 src 206.189.96.121 uid 0

curl -k https://10.32.0.1:443/version 
{
  "major": "1",
  "minor": "10",
  "gitVersion": "v1.10.2",
  "gitCommit": "81753b10df112992bf51bbc2c2f85208aad78335",
  "gitTreeState": "clean",
  "buildDate": "2018-04-27T09:10:24Z",
  "goVersion": "go1.9.3",
  "compiler": "gc",
  "platform": "linux/amd64"
}

Restarted all workers and the pods, including kube-dns, so they no longer crash, but I still have problems when I try to exec or run:

kubectl run test --image=ubuntu -it --rm bash
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: error dialing backend: dial tcp: lookup worker3 on 127.0.0.53:53: server misbehaving
Error from server: Get https://worker3:10250/containerLogs/default/test-6954947c4f-6gkdl/test: dial tcp: lookup worker3 on 127.0.0.53:53: server misbehaving
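
A blunt workaround for these recurring hostname lookups (assuming the private IPs listed in the network section) is to make the worker names resolvable on the controllers and on the machine running kubectl:

sudo tee -a /etc/hosts <<EOF
10.133.56.88  worker1
10.133.56.89  worker2
10.133.55.220 worker3
EOF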

0 Answers:

No answers yet.