"nslookup: write to '10.96.0.10': Connection refused" from inside a pod in a Kubernetes (K8s) cluster (DNS issue)

Date: 2020-04-10 12:44:55

Tags: kubernetes dns nslookup weave

Question

I have a custom-installed k8s cluster with 1 master and 1 node, running on AWS EC2 instances based on CentOS 7. It uses CoreDNS (the pods run fine, with no errors in the logs). When I run nslookup google.com inside a pod on the node, the output is:

nslookup: write to '10.96.0.10': Connection refused
;; connection timed out; no servers could be reached

Running ping 8.8.8.8 from inside the pod, for example, works fine:

PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=50 time=1.330 ms
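
To reproduce the lookup failure outside the application pod, a throwaway test pod can be used (a minimal sketch; busybox:1.28 is just an example image whose nslookup works, and kubernetes.default is the in-cluster API service name):

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default
# With healthy cluster DNS this resolves to the kubernetes service ClusterIP;
# here it fails the same way nslookup google.com does.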

/etc/resolv.conf inside the pod looks like this:

nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local ec2.internal
options ndots:5
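
To check whether the nameserver from resolv.conf is reachable at all, the server can be passed to nslookup explicitly from inside the pod (a sketch; busybox's nslookup accepts the server as a second argument):

nslookup kubernetes.default.svc.cluster.local 10.96.0.10
# "Connection refused" here as well would point at the path from the pod to the
# kube-dns service IP (kube-proxy / overlay network) rather than at CoreDNS itself.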

The same command, nslookup google.com, works fine on the node itself:

Server:         172.31.0.2
Address:        172.31.0.2#53

Non-authoritative answer:
Name:   google.com
Address: 172.217.15.110
Name:   google.com
Address: 2607:f8b0:4004:801::200e
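
The node resolves through the VPC resolver (172.31.0.2), so this does not exercise the cluster DNS path. To test the kube-dns service IP from the node directly, dig can be pointed at it (a sketch; assumes bind-utils is installed on the CentOS host):

dig @10.96.0.10 kubernetes.default.svc.cluster.local +short
# Should return the kubernetes service ClusterIP when kube-proxy and the
# overlay network are working.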

The kubelet configuration, kubectl get configmap kubelet-config-1.17 -n kube-system -o yaml, returns:

data:
  kubelet: |
    apiVersion: kubelet.config.k8s.io/v1beta1
    authentication:
      anonymous:
        enabled: false
      webhook:
        cacheTTL: 0s
        enabled: true
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt
    authorization:
      mode: Webhook
      webhook:
        cacheAuthorizedTTL: 0s
        cacheUnauthorizedTTL: 0s
    clusterDNS:
    - 10.96.0.10
    clusterDomain: cluster.local
    cpuManagerReconcilePeriod: 0s
    evictionPressureTransitionPeriod: 0s
    fileCheckFrequency: 0s
    healthzBindAddress: 127.0.0.1
    healthzPort: 10248
    httpCheckFrequency: 0s
    imageMinimumGCAge: 0s
    kind: KubeletConfiguration
    nodeStatusReportFrequency: 0s
    nodeStatusUpdateFrequency: 0s
    rotateCertificates: true
    runtimeRequestTimeout: 0s
    staticPodPath: /etc/kubernetes/manifests
    streamingConnectionIdleTimeout: 0s
    syncFrequency: 0s
    volumeStatsAggPeriod: 0s
kind: ConfigMap
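
The clusterDNS entry (10.96.0.10) should match the ClusterIP of the kube-dns service, which can be confirmed with (kube-dns is the standard service name fronting CoreDNS in kubeadm installs):

kubectl get svc -n kube-system kube-dns -o wide
# CLUSTER-IP should be 10.96.0.10, with ports 53/UDP, 53/TCP and 9153/TCP.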

The pods in the kube-system namespace (kubectl get pods -n kube-system) look like this:

coredns-6955765f44-qdbgx                                1/1     Running   6          11d
coredns-6955765f44-r4v7n                                1/1     Running   6          11d
etcd-ip-172-31-42-121.ec2.internal                      1/1     Running   7          11d
kube-apiserver-ip-172-31-42-121.ec2.internal            1/1     Running   7          11d
kube-controller-manager-ip-172-31-42-121.ec2.internal   1/1     Running   6          11d
kube-proxy-lrpd9                                        1/1     Running   6          11d
kube-proxy-z55cv                                        1/1     Running   6          11d
kube-scheduler-ip-172-31-42-121.ec2.internal            1/1     Running   6          11d
weave-net-bdn5n                                         2/2     Running   0          39h
weave-net-z7mks                                         2/2     Running   5          39h
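
Since the CoreDNS pods are Running, it is also worth confirming that they are registered as endpoints behind the kube-dns service (a quick sanity check):

kubectl get endpoints kube-dns -n kube-system
# Expect the two coredns pod IPs on ports 53 and 9153; an empty list would mean
# the service has nothing to forward to.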

Update

Running ip route inside the pod returns:

default via 10.32.0.1 dev eth0 
10.32.0.0/12 dev eth0 scope link  src 10.32.0.16 

From the master:

default via 172.31.32.1 dev eth0 
10.32.0.0/12 dev weave proto kernel scope link src 10.32.0.1 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
172.31.32.0/20 dev eth0 proto kernel scope link src 172.31.42.121 

From the node:

default via 172.31.32.1 dev eth0 
10.32.0.0/12 dev weave proto kernel scope link src 10.32.0.1 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
172.31.32.0/20 dev eth0 proto kernel scope link src 172.31.46.62 
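
Both hosts report src 10.32.0.1 on the weave interface, so the overlay itself is worth inspecting. Weave Net's status can be queried from inside one of the weave-net pods (a sketch based on the Weave Net documentation; the pod name is one of those listed above):

kubectl exec -n kube-system weave-net-bdn5n -c weave -- /home/weave/weave --local status
# The Peers line should show 2 peers with established connections; a single peer
# or failed connections would indicate the two nodes never meshed.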

The CoreDNS ConfigMap, kubectl -n kube-system get configmap coredns -oyaml, is:

apiVersion: v1
data:
  Corefile: |
    .:53 {
        log
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
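
Because the log plugin is enabled in this Corefile, CoreDNS prints every query it receives, which makes it easy to see whether queries from the pod arrive at all (the label selector assumes the standard kubeadm deployment labels):

kubectl logs -n kube-system -l k8s-app=kube-dns --tail=50
# If no query lines appear while nslookup is failing in the pod, the traffic never
# reaches CoreDNS, pointing back at kube-proxy or the overlay network.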

So why does nslookup google.com not work inside the pod?

1 answer:

Answer 0 (score: 1)

The k8s cluster had been installed incorrectly: the ansible script should include the correct private IPs of the master and node EC2 VMs, e.g.:

dev-kubernetes-master ansible_host=34.233.207.xxx private_ip=172.31.37.xx
dev-kubernetes-slave ansible_host=52.6.10.xxx private_ip=172.31.42.xxx

I reinstalled the cluster with the correct private IPs specified (previously there were no private IPs at all), and the problem is solved.
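
A minimal way to pin and then verify the private addresses, assuming kubeadm is what the ansible playbook drives underneath:

kubeadm init --apiserver-advertise-address=172.31.37.xx   # on the master, its private IP from the inventory above
kubectl get nodes -o wide                                  # INTERNAL-IP column should show the 172.31.x.x addresses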