Kubernetes NodePort connection refused

Time: 2020-07-16 16:58:58

Tags: docker kubernetes centos load-balancing

In a VirtualBox environment, I have a cluster of 3 nodes. I created the cluster with the flag

kubeadm init --pod-network-cidr=10.244.0.0/16

Then I installed Flannel and joined the remaining two nodes to the cluster. After that, I created a new virtual machine to host a private repository for Docker images. Next, I created the Deployment for my application with this .yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gunicorn
spec:
  selector:
    matchLabels:
      app: gunicorn
  replicas: 1
  template:
    metadata:
      labels:
        app: gunicorn
    spec:
      imagePullSecrets:
      - name: my-registry-key
      containers:
      - name: ipcheck2
        image: 192.168.2.4:8083/ipcheck2:1
        imagePullPolicy: Always
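        # note: specifying command here overrides the image's CMD,
        # so the gunicorn server from the Dockerfile is never started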
        command:
        - sleep
        - "infinity"
        ports:
        - containerPort: 8080
          hostPort: 8080
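
The my-registry-key secret referenced above was created for the private registry; a command of roughly this shape produces it (a sketch — the username and password are placeholders, only the registry address and secret name come from this setup):

kubectl create secret docker-registry my-registry-key \
    --docker-server=192.168.2.4:8083 \
    --docker-username=<user> \
    --docker-password=<password>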

The image was built from the following Dockerfile and pushed to the repository:

FROM python:3

EXPOSE 8080

ADD /IP_check/ /

WORKDIR /

RUN pip install pip --upgrade

RUN pip install -r requirements.txt

CMD ["gunicorn", "IP_check.wsgi", "-b :8080"]

At this point I can say that if I run the container directly with the Docker engine, this port is exposed and I can connect to the application.
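
Concretely, the Docker-side test was along these lines (a sketch; with no overriding command, the image's CMD starts gunicorn):

docker run -d -p 8080:8080 192.168.2.4:8083/ipcheck2:1
curl http://localhost:8080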

Next, I created a NodePort Service for my app:

apiVersion: v1
kind: Service
metadata:
  name: ipcheck
spec:
  selector:
    app: gunicorn
  ports:
  - port: 70
    targetPort: 8080
    nodePort: 30000
  type: NodePort
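
With this Service, traffic should flow <node IP>:30000 -> ClusterIP:70 -> pod:8080. A quick way to check each hop (a sketch; the ClusterIP 10.111.7.129 is the one reported in the service description below):

kubectl get svc ipcheck
kubectl get endpoints ipcheck
curl http://10.111.7.129:70      # ClusterIP, from any node
curl http://192.168.2.3:30000    # NodePort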

Here is the problem. I checked with kubectl describe pods which node is running the pod with my application. Then I tried to reach the application with curl <node IP>:30000, but it does not work:

curl: (7) Failed connect to 192.168.2.3:30000; Connection refused
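
For reference, a way to test the pod directly while bypassing the Service entirely is kubectl port-forward (a sketch using the pod name from the description below):

kubectl port-forward pod/gunicorn-5f7f485585-wjdnf 8080:8080 &
curl http://localhost:8080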

I also deployed the hello-world application from the Kubernetes documentation and exposed it via NodePort. That does not work either.

Does anyone know why I cannot reach the pod via NodePort, either from inside or from outside the cluster?

OS: CentOS 7

IP addresses:

Node1 192.168.2.1   -   Master
Node2 192.168.2.2   -   Worker
Node3 192.168.2.3   -   Worker
Node4 192.168.2.4   -   Private repo (outside of cluster)

Pod description:

Name:         gunicorn-5f7f485585-wjdnf
Namespace:    default
Priority:     0
Node:         node3/192.168.2.3
Start Time:   Thu, 16 Jul 2020 18:01:54 +0200
Labels:       app=gunicorn
              pod-template-hash=5f7f485585
Annotations:  <none>
Status:       Running
IP:           10.244.1.20
IPs:
  IP:           10.244.1.20
Controlled By:  ReplicaSet/gunicorn-5f7f485585
Containers:
  ipcheck2:
    Container ID:  docker://9aa18f3fff1d13dfc76355dde72554fd3af304435c9b7fc4f7365b4e6ac9059a
    Image:         192.168.2.4:8083/ipcheck2:1
    Image ID:      docker-pullable://192.168.2.4:8083/ipcheck2@sha256:e48469c6d1bec474b32cd04ca5ccbc057da0377dff60acc37e7fa786cbc39526
    Port:          8080/TCP
    Host Port:     8080/TCP
    Command:
      sleep
      infinity
    State:          Running
      Started:      Thu, 16 Jul 2020 18:01:55 +0200
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9q77c (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-9q77c:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-9q77c
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  40m   default-scheduler  Successfully assigned default/gunicorn-5f7f485585-wjdnf to node3
  Normal  Pulling    40m   kubelet, node3     Pulling image "192.168.2.4:8083/ipcheck2:1"
  Normal  Pulled     40m   kubelet, node3     Successfully pulled image "192.168.2.4:8083/ipcheck2:1"
  Normal  Created    40m   kubelet, node3     Created container ipcheck2
  Normal  Started    40m   kubelet, node3     Started container ipcheck2

Service description:

Name:                     ipcheck
Namespace:                default
Labels:                   <none>
Annotations:              Selector:  app=gunicorn
Type:                     NodePort
IP:                       10.111.7.129
Port:                     <unset>  70/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30000/TCP
Endpoints:                10.244.1.20:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

Node3 iptables:

Chain INPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
KUBE-EXTERNAL-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere

Chain FORWARD (policy DROP)
target     prot opt source               destination
KUBE-FORWARD  all  --  anywhere             anywhere             /* kubernetes forwarding rules */
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
DOCKER-USER  all  --  anywhere             anywhere
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  10.244.0.0/16        anywhere
ACCEPT     all  --  anywhere             10.244.0.0/16

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere

Chain DOCKER (1 references)
target     prot opt source               destination

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

Chain DOCKER-USER (1 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere

Chain KUBE-EXTERNAL-SERVICES (1 references)
target     prot opt source               destination
REJECT     tcp  --  anywhere             anywhere             /* default/gunicorn-ipcheck: has no endpoints */ ADDRTYPE match dst-type LOCAL tcp dpt:30384 reject-with icmp-port-unreachable

Chain KUBE-FIREWALL (2 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere             /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000

Chain KUBE-FORWARD (1 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere             ctstate INVALID
ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED

Chain KUBE-KUBELET-CANARY (0 references)
target     prot opt source               destination

Chain KUBE-PROXY-CANARY (0 references)
target     prot opt source               destination

Chain KUBE-SERVICES (3 references)
target     prot opt source               destination
REJECT     tcp  --  anywhere             10.104.59.152        /* default/gunicorn-ipcheck: has no endpoints */ tcp dpt:webcache reject-with icmp-port-unreachable
REJECT     tcp  --  anywhere             192.168.2.240        /* default/gunicorn-ipcheck: has no endpoints */ tcp dpt:webcache reject-with icmp-port-unreachable

"ip a" on Node3:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:a4:1d:ff brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic enp0s3
       valid_lft 86181sec preferred_lft 86181sec
    inet6 fe80::1272:64b5:b03b:2b75/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:14:7f:ad brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.3/24 brd 192.168.2.255 scope global noprefixroute enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::2704:2b92:cc02:e88/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:a1:17:41:be brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 6e:c6:9c:0f:ab:55 brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::6cc6:9cff:fe0f:ab55/64 scope link
       valid_lft forever preferred_lft forever
6: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether 4a:66:88:71:56:6a brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.1/24 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::4866:88ff:fe71:566a/64 scope link
       valid_lft forever preferred_lft forever
7: veth0ded1d29@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
    link/ether 22:c2:6b:c7:cc:7a brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::20c2:6bff:fec7:cc7a/64 scope link
       valid_lft forever preferred_lft forever

Endpoints:

ipcheck            10.244.1.21:8080   51m
kubernetes         192.168.2.1:6443   9d

1 Answer:

Answer 0 (score: 0):

I expect you are able to curl the ClusterIP internally: http://10.111.7.129:70

It seems the port is not open. Try opening port 30000 at the VirtualBox level, or on the security group if you are using AKS or IBM Cloud.
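
On CentOS 7 with firewalld, opening the NodePort would look roughly like this (a sketch; run on each worker node):

firewall-cmd --permanent --add-port=30000/tcp
firewall-cmd --reload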

Then use <node IP>:30000.

Thanks, VB