Kubernetes ingress deployment on AWS: load balancer pending

Date: 2019-03-25 16:57:58

Tags: amazon-ec2 kubernetes amazon-elb kubernetes-ingress

In summary, these are the steps I have completed:

  1. Launched 2 new t3.small instances in AWS, pre-tagged with the key kubernetes.io/cluster/<cluster-name> and the value member

  2. Tagged a new security group with the same tag and opened all the ports mentioned here - https://kubernetes.io/docs/setup/independent/install-kubeadm/#check-required-ports

  3. Changed the hostname to the output of curl http://169.254.169.254/latest/meta-data/local-hostname and verified it with hostnamectl

  4. Rebooted

  5. Configured the AWS CLI: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html

  6. Created an IAM role with full ("*") permissions and assigned it to the EC2 instances.

  7. Installed kubelet, kubeadm, and kubectl using apt-get

  8. Created /etc/default/kubelet with the content KUBELET_EXTRA_ARGS=--cloud-provider=aws

  9. Ran kubeadm init --pod-network-cidr=10.244.0.0/16 on one instance and used its output to kubeadm join ... the other node

  10. Installed Helm

  11. Installed the ingress controller with the default backend
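The node-preparation part of the steps above can be sketched as a shell script. The package names, the kubelet flag, and the kubeadm flags come from the steps listed; the apt repository details follow the standard kubeadm install docs of that era and may differ on a current system:

```shell
# Install kubelet, kubeadm, and kubectl from the Kubernetes apt repository
# (repo URL per the kubeadm install docs current at the time of the question)
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" \
  > /etc/apt/sources.list.d/kubernetes.list
apt-get update && apt-get install -y kubelet kubeadm kubectl

# Tell the kubelet which cloud provider it runs on
echo 'KUBELET_EXTRA_ARGS=--cloud-provider=aws' > /etc/default/kubelet

# On the first instance only:
kubeadm init --pod-network-cidr=10.244.0.0/16
# ...then run the printed `kubeadm join ...` command on the other node
```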

Previously, I had tried the same steps, but installed the ingress following the instructions on kubernetes.github.io. Both attempts ended in the same state, with EXTERNAL-IP stuck at <pending>.


The current state:

kubectl get pods --all-namespaces -o wide

NAMESPACE     NAME                                                                   IP              NODE                                           
ingress       ingress-nginx-ingress-controller-77d989fb4d-qz4f5                      10.244.1.13     ip-YYY-YY-Y-YYY.ap-south-1.compute.internal               
ingress       ingress-nginx-ingress-default-backend-7f7bf55777-dhj75                 10.244.1.12     ip-YYY-YY-Y-YYY.ap-south-1.compute.internal               
kube-system   coredns-86c58d9df4-bklt8                                               10.244.1.14     ip-YYY-YY-Y-YYY.ap-south-1.compute.internal               
kube-system   coredns-86c58d9df4-ftn8q                                               10.244.1.16     ip-YYY-YY-Y-YYY.ap-south-1.compute.internal               
kube-system   etcd-ip-XXX-XX-XX-XXX.ap-south-1.compute.internal                      172.31.12.119   ip-XXX-XX-XX-XXX.ap-south-1.compute.internal              
kube-system   kube-apiserver-ip-XXX-XX-XX-XXX.ap-south-1.compute.internal            172.31.12.119   ip-XXX-XX-XX-XXX.ap-south-1.compute.internal              
kube-system   kube-controller-manager-ip-XXX-XX-XX-XXX.ap-south-1.compute.internal   172.31.12.119   ip-XXX-XX-XX-XXX.ap-south-1.compute.internal              
kube-system   kube-flannel-ds-amd64-87k8p                                            172.31.12.119   ip-XXX-XX-XX-XXX.ap-south-1.compute.internal              
kube-system   kube-flannel-ds-amd64-f4wft                                            172.31.3.106    ip-YYY-YY-Y-YYY.ap-south-1.compute.internal               
kube-system   kube-proxy-79cp2                                                       172.31.3.106    ip-YYY-YY-Y-YYY.ap-south-1.compute.internal               
kube-system   kube-proxy-sv7md                                                       172.31.12.119   ip-XXX-XX-XX-XXX.ap-south-1.compute.internal              
kube-system   kube-scheduler-ip-XXX-XX-XX-XXX.ap-south-1.compute.internal            172.31.12.119   ip-XXX-XX-XX-XXX.ap-south-1.compute.internal              
kube-system   tiller-deploy-5b7c66d59c-fgwcp                                         10.244.1.15     ip-YYY-YY-Y-YYY.ap-south-1.compute.internal  

kubectl get svc --all-namespaces -o wide

NAMESPACE     NAME                                    TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
default       kubernetes                              ClusterIP      10.96.0.1        <none>        443/TCP                      73m   <none>
ingress       ingress-nginx-ingress-controller        LoadBalancer   10.97.167.197    <pending>     80:32722/TCP,443:30374/TCP   59m   app=nginx-ingress,component=controller,release=ingress
ingress       ingress-nginx-ingress-default-backend   ClusterIP      10.109.198.179   <none>        80/TCP                       59m   app=nginx-ingress,component=default-backend,release=ingress
kube-system   kube-dns                                ClusterIP      10.96.0.10       <none>        53/UDP,53/TCP                73m   k8s-app=kube-dns
kube-system   tiller-deploy                           ClusterIP      10.96.216.119    <none>        44134/TCP                    67m   app=helm,name=tiller

kubectl describe service -n ingress ingress-nginx-ingress-controller
Name:                     ingress-nginx-ingress-controller
Namespace:                ingress
Labels:                   app=nginx-ingress
                          chart=nginx-ingress-1.4.0
                          component=controller
                          heritage=Tiller
                          release=ingress
Annotations:              service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: *
Selector:                 app=nginx-ingress,component=controller,release=ingress
Type:                     LoadBalancer
IP:                       10.104.55.18
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  32318/TCP
Endpoints:                10.244.1.20:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  32560/TCP
Endpoints:                10.244.1.20:443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
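Since the Service shows no events, one way to see why no ELB is being created is to inspect the controller manager, which runs the load-balancer provisioning loop. A debugging sketch (the pod name is taken from the pod listing above; the grep patterns are just a starting point):

```shell
# Look for load-balancer or cloud-provider errors in the controller manager
kubectl logs -n kube-system \
  kube-controller-manager-ip-XXX-XX-XX-XXX.ap-south-1.compute.internal \
  | grep -i -E 'loadbalancer|aws|error'

# Check whether the Node objects carry an AWS ProviderID; an empty second
# column suggests the cloud provider was not active when the node registered
kubectl get nodes \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.providerID}{"\n"}{end}'
```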

Inline policy attached to the IAM role:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*"
        }
    ]
}

kubectl get nodes -o wide

NAME                                           STATUS   ROLES    AGE     VERSION   INTERNAL-IP     EXTERNAL-IP     OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
ip-172-31-12-119.ap-south-1.compute.internal   Ready    master   6d19h   v1.13.4   172.31.12.119   XX.XXX.XXX.XX   Ubuntu 16.04.5 LTS   4.4.0-1077-aws   docker://18.6.3
ip-172-31-3-106.ap-south-1.compute.internal    Ready    <none>   6d19h   v1.13.4   172.31.3.106    XX.XXX.XX.XXX   Ubuntu 16.04.5 LTS   4.4.0-1077-aws   docker://18.6.3

Can someone point out what I am missing here? Everywhere on the internet it says a Classic ELB should be deployed automatically.

1 answer:

Answer 0 (score: 1):

For an AWS ELB (Classic type), you have to:

  1. Explicitly specify --cloud-provider=aws in the control-plane manifests located in /etc/kubernetes/manifests on the master node:

    kube-controller-manager.yaml kube-apiserver.yaml

  2. Restart the services:

    sudo systemctl daemon-reload

    sudo systemctl restart kubelet


Add the flag alongside the other command arguments, at the bottom or top as needed. The result should look similar to:

kube-controller-manager.yaml

spec:
  containers:
  - command:
    - kube-controller-manager
    - --cloud-provider=aws

kube-apiserver.yaml

spec:
  containers:
  - command:
    - kube-apiserver
    - --cloud-provider=aws
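After the restart, the controller manager should pick up the AWS cloud provider and provision a Classic ELB for the LoadBalancer Service. One way to verify, a sketch using the Service name from the question (the `component` label is the one kubeadm puts on its static control-plane pods):

```shell
# Confirm the flag is now present on the running controller-manager pod
kubectl -n kube-system get pods -l component=kube-controller-manager \
  -o jsonpath='{.items[*].spec.containers[*].command}' | tr ',' '\n' \
  | grep cloud-provider

# Watch the Service until EXTERNAL-IP flips from <pending> to an ELB DNS name
kubectl -n ingress get svc ingress-nginx-ingress-controller -w
```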