ALB Ingress Controller on AWS

Posted: 2020-01-10 18:14:52

Tags: amazon-web-services kubernetes-ingress amazon-eks aws-eks eks

I am trying to set up the ALB Ingress Controller on AWS EKS, following this tutorial exactly: ingress_controller_alb. However, I cannot get an address for my Ingress.

Indeed, if I run the command kubectl get ingress/2048-ingress -n 2048-game, there is still no address after 10 minutes. Any ideas?

4 answers:

Answer 0 (score: 0)

There may be a problem with the version of the aws-alb-ingress-controller you are using: you are on an old release of the Ingress Controller, 1.0.0, while the current one is 1.1.3.

I recommend reading the documentation: ingress-controller-alb

1. Download the sample ALB Ingress Controller manifest

wget https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.3/docs/examples/alb-ingress-controller.yaml

2. Configure the ALB Ingress Controller manifest

At minimum, edit the following variable:

--cluster-name=devCluster: name of the cluster. AWS resources will be tagged with kubernetes.io/cluster/devCluster:owned

If ec2metadata is unavailable from the controller pod, also edit the following variables:

--aws-vpc-id=vpc-xxxxxx: vpc ID of the cluster.
--aws-region=us-west-1: AWS region of the cluster.
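
Putting those edits together, the relevant part of alb-ingress-controller.yaml would look roughly like the excerpt below (the cluster name, VPC ID, and region are placeholders to replace with your own values):

```yaml
# Excerpt from alb-ingress-controller.yaml -- container args only.
# --aws-vpc-id / --aws-region are only needed when the controller pod
# cannot reach the EC2 metadata service.
spec:
  containers:
    - name: alb-ingress-controller
      args:
        - --ingress-class=alb
        - --cluster-name=devCluster      # placeholder: your cluster name
        - --aws-vpc-id=vpc-xxxxxx        # placeholder: your VPC ID
        - --aws-region=us-west-1         # placeholder: your region
```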

3. Deploy the RBAC roles manifest

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.3/docs/examples/rbac-role.yaml

4. Deploy the ALB Ingress Controller manifest

kubectl apply -f alb-ingress-controller.yaml

5. Verify the deployment was successful and the controller started

kubectl logs -n kube-system $(kubectl get po -n kube-system | egrep -o "alb-ingress[a-zA-Z0-9-]+")

You should be able to see output similar to the following:

-------------------------------------------------------------------------------
AWS ALB Ingress controller
Release:    1.0.0
Build:      git-7bc1850b
Repository: https://github.com/kubernetes-sigs/aws-alb-ingress-controller.git
-------------------------------------------------------------------------------
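
As a side note, the egrep pattern in step 5 simply extracts the controller pod's name from the pod listing. The snippet below demonstrates the same pattern against a hypothetical line of kubectl get po output, so you can see what the inner command substitution resolves to:

```shell
# Hypothetical line from `kubectl get po -n kube-system`
sample='alb-ingress-controller-669b958f64-p69fw   1/1     Running   0          3m7s'

# Same pattern as in step 5: pull out the pod name
name=$(printf '%s\n' "$sample" | grep -Eo 'alb-ingress[a-zA-Z0-9-]+')
echo "$name"   # alb-ingress-controller-669b958f64-p69fw
```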

You can then deploy the sample application.

Execute the following commands:

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.3/docs/examples/2048/2048-namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.3/docs/examples/2048/2048-deployment.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.3/docs/examples/2048/2048-service.yaml

Deploy an Ingress resource for the 2048 game:

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.3/docs/examples/2048/2048-ingress.yaml

After a few seconds, verify that the Ingress resource is enabled:

kubectl get ingress/2048-ingress -n 2048-game

Answer 1 (score: 0)

I struggled with the same problem, but finally got it working after following @MaggieO's steps above. A few things to consider:

  1. Add public and private subnets to your EKS cluster. Make sure your public subnets are tagged with "kubernetes.io/role/elb": "1". If you create a managed node group, select only private subnets for placing your worker nodes.
  2. Make sure the IAM role of your worker nodes has the following policies: AmazonEKSWorkerNodePolicy, AmazonEC2ContainerRegistryReadOnly, AmazonEKS_CNI_Policy, and the custom policy defined here: https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.2/docs/examples/iam-policy.json
  3. Check your ingress controller logs; they are helpful:

    kubectl logs -n kube-system [name of your ingress controller]

Answer 2 (score: 0)

Thank you for your answers!

I think the problem is that when I create the cluster with the command eksctl create cluster -f cluster.yaml, the cluster is created without EC2 instances:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: test
  region: eu-central-1
  version: "1.14"
vpc:
  id: vpc-50b17738
  subnets:
    private:
      eu-central-1a: { id: subnet-aee763c6 }
      eu-central-1b: { id: subnet-bc2ee6c6 }
      eu-central-1c: { id: subnet-24734d6e }
nodeGroups:
  - name: ng-1-workers
    labels: { role: workers }
    instanceType: t3.medium
    desiredCapacity: 2
    volumeSize: 5
    privateNetworking: true

I tried using both node groups and managed node groups, but I get the following timeout error:

...
[ℹ]  nodegroup "ng-1-workers" has 0 node(s)
[ℹ]  waiting for at least 2 node(s) to become ready in "ng-1-workers"
Error: timed out (after 25m0s) waiting for at least 2 nodes to join the cluster and become ready in "ng-1-workers"
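
Following the subnet advice in the answer above, one possible fix is to register public subnets in the eksctl config as well; worker nodes with privateNetworking: true also need outbound internet access (typically via a NAT gateway) to join the cluster. A sketch of the adjusted vpc section is below; the public subnet IDs are hypothetical placeholders:

```yaml
# Sketch: vpc section of cluster.yaml with public subnets added.
# Subnet IDs under "public" are placeholders -- use your own, and tag
# the public subnets with kubernetes.io/role/elb = 1 for the ALB.
vpc:
  id: vpc-50b17738
  subnets:
    private:
      eu-central-1a: { id: subnet-aee763c6 }
      eu-central-1b: { id: subnet-bc2ee6c6 }
      eu-central-1c: { id: subnet-24734d6e }
    public:
      eu-central-1a: { id: subnet-aaaaaaaa }   # hypothetical ID
      eu-central-1b: { id: subnet-bbbbbbbb }   # hypothetical ID
```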

Answer 3 (score: 0)

If the controller was created successfully, you will find it like this:

$ kubectl get po -n kube-system | grep alb
alb-ingress-controller-669b958f64-p69fw               1/1     Running   0          3m7s

And its logs:

$ kubectl logs -n kube-system $(kubectl get po -n kube-system | egrep -o alb-ingress[a-zA-Z0-9-]+)
-------------------------------------------------------------------------------
AWS ALB Ingress controller
  Release:    v1.1.8
  Build:      git-ec387ad1
  Repository: https://github.com/kubernetes-sigs/aws-alb-ingress-controller.git
-------------------------------------------------------------------------------

W0720 13:31:21.242868       1 client_config.go:549] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.