I am trying to create an AWS EKS cluster with an ALB ingress using Terraform resources.
This document says the ingress will automatically create a load balancer with associated listeners and target groups.
The Kubernetes ingress does create the ALB load balancer, a security group and rules, but it does not create the target groups or listeners. I have tried using both the gateway and the application subnets, but it makes no difference. I tried setting the security group, but the ALB set up and used its own self-managed security group.
I relied on this guide.
Curling the ALB gets me:
Failed to connect to de59ecbf-default-mainingre-8687-1051686593.ap-southeast-1.elb.amazonaws.com port 80: Connection refused
I created the IAM roles and ACM certificates separately, because AWS has quota limits on them. My roles for the EKS cluster and nodes are standard, and the node role has the latest policies attached.
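For reference, by "standard" I mean the AWS managed policies; a sketch of the attachments (the node role name here is an assumption):

resource "aws_iam_role_policy_attachment" "cluster_AmazonEKSClusterPolicy" {
  role       = data.aws_iam_role.tf-eks-master.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}

resource "aws_iam_role_policy_attachment" "node_AmazonEKSWorkerNodePolicy" {
  role       = "terraform-eks-node" # assumed node role name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
}

resource "aws_iam_role_policy_attachment" "node_AmazonEKS_CNI_Policy" {
  role       = "terraform-eks-node" # assumed node role name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
}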
I applied the Kubernetes ingress separately with kubectl, but got the same result: it creates the ALB and a security group with port rules, but no target group or listeners.
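The apply step itself was just (the manifest is shown further down as kubernetes_ingress.yml):

kubectl apply -f kubernetes_ingress.yml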
When I paste the cluster endpoint from aws eks describe-cluster --name my-tf-eks-cluster --query "cluster.endpoint" into a browser, I get:
{"kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"", "reason": "Forbidden", "details": {}, "code": 403}
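That 403 is expected when hitting the endpoint from a browser: the request arrives anonymously and the API server denies system:anonymous. The same endpoint answers through the authenticated kubeconfig, e.g.:

kubectl get --raw /api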
Also, the ingress has no IP address.
kubectl describe ingresses
Name: main-ingress
Namespace: default
Address:
Default backend: go-hello-world:8080 (<none>)
Rules:
Host Path Backends
---- ---- --------
* * go-hello-world:8080 (<none>)
aws eks describe-cluster --name my-tf-eks-cluster --query cluster.endpoint"
"https://88888888B.gr7.ap-southeast-1.eks.amazonaws.com"
curl https://88888888B.gr7.ap-southeast-1.eks.amazonaws.com
curl: (60) SSL certificate problem: unable to get local issuer certificate
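That error is just curl not trusting the EKS-issued CA. Passing the cluster CA explicitly gets past it (a sketch, assuming the AWS CLI and base64 are available; /version is one of the few paths open to unauthenticated callers):

aws eks describe-cluster --name my-tf-eks-cluster \
  --query "cluster.certificateAuthority.data" --output text | base64 -d > ca.crt
curl --cacert ca.crt https://88888888B.gr7.ap-southeast-1.eks.amazonaws.com/version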
Edit: The IAM cluster policy was missing these permissions. I've since decided it's better to use ELBs, since they can terminate SSL certificates, with traefik as a backend proxy, so I can't really test this right now. Can anyone confirm whether the ALB needs these permissions?
"elasticloadbalancing:DescribeListenerCertificates",
"elasticloadbalancing:AddListenerCertificates",
"elasticloadbalancing:RemoveListenerCertificates"
Here is my EKS master resource:
data "aws_iam_role" "tf-eks-master" {
name = "terraform-eks-cluster"
}
resource "aws_eks_cluster" "tf_eks" {
name = var.cluster_name
role_arn = data.aws_iam_role.tf-eks-master.arn
vpc_config {
security_group_ids = [aws_security_group.master.id]
subnet_ids = var.application_subnet_ids
endpoint_private_access = true
endpoint_public_access = true
}
}
The ALB ingress controller:
output "vpc_id" {
value = data.aws_vpc.selected
}
data "aws_subnet_ids" "selected" {
vpc_id = data.aws_vpc.selected.id
tags = map(
"Name", "application",
)
}
resource "kubernetes_deployment" "alb-ingress" {
metadata {
name = "alb-ingress-controller"
labels = {
"app.kubernetes.io/name" = "alb-ingress-controller"
}
namespace = "kube-system"
}
spec {
selector {
match_labels = {
"app.kubernetes.io/name" = "alb-ingress-controller"
}
}
template {
metadata {
labels = {
"app.kubernetes.io/name" = "alb-ingress-controller"
}
}
spec {
volume {
name = kubernetes_service_account.alb-ingress.default_secret_name
secret {
secret_name = kubernetes_service_account.alb-ingress.default_secret_name
}
}
container {
# This is where you change the version when Amazon comes out with a new version of the ingress controller
image = "docker.io/amazon/aws-alb-ingress-controller:v1.1.8"
name = "alb-ingress-controller"
args = [
"--ingress-class=alb",
"--cluster-name=${var.cluster_name}",
"--aws-vpc-id=${data.aws_vpc.selected.id}",
"--aws-region=${var.aws_region}"
]
volume_mount {
name = kubernetes_service_account.alb-ingress.default_secret_name
mount_path = "/var/run/secrets/kubernetes.io/serviceaccount"
read_only = true
}
}
service_account_name = "alb-ingress-controller"
}
}
}
}
resource "kubernetes_service_account" "alb-ingress" {
metadata {
name = "alb-ingress-controller"
namespace = "kube-system"
labels = {
"app.kubernetes.io/name" = "alb-ingress-controller"
}
}
automount_service_account_token = true
}
kubernetes_ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/scheme: "internet-facing"
    alb.ingress.kubernetes.io/target-type: "ip"
    alb.ingress.kubernetes.io/subnets: 'subnet-0ab65d9cec9451287, subnet-034bf8856ab9157b7, subnet-0c16b1d382fadd0b4'
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80},{"HTTPS": 443}]'
spec:
  backend:
    serviceName: go-hello-world
    servicePort: 8080
The roles:
resource "kubernetes_cluster_role" "alb-ingress" {
metadata {
name = "alb-ingress-controller"
labels = {
"app.kubernetes.io/name" = "alb-ingress-controller"
}
}
rule {
api_groups = ["", "extensions"]
resources = ["configmaps", "endpoints", "events", "ingresses", "ingresses/status", "services"]
verbs = ["create", "get", "list", "update", "watch", "patch"]
}
rule {
api_groups = ["", "extensions"]
resources = ["nodes", "pods", "secrets", "services", "namespaces"]
verbs = ["get", "list", "watch"]
}
}
resource "kubernetes_cluster_role_binding" "alb-ingress" {
metadata {
name = "alb-ingress-controller"
labels = {
"app.kubernetes.io/name" = "alb-ingress-controller"
}
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "ClusterRole"
name = "alb-ingress-controller"
}
subject {
kind = "ServiceAccount"
name = "alb-ingress-controller"
namespace = "kube-system"
}
}
Some of the code from the VPC:
data "aws_availability_zones" "available" {}
resource "aws_subnet" "gateway" {
count = var.subnet_count
availability_zone = data.aws_availability_zones.available.names[count.index]
cidr_block = "10.0.1${count.index}.0/24"
vpc_id = aws_vpc.tf_eks.id
tags = map(
"Name", "gateway",
)
}
resource "aws_subnet" "application" {
count = var.subnet_count
availability_zone = data.aws_availability_zones.available.names[count.index]
cidr_block = "10.0.2${count.index}.0/24"
vpc_id = aws_vpc.tf_eks.id
tags = map(
"Name", "application",
"kubernetes.io/cluster/${var.cluster_name}", "shared",
"kubernetes.io/role/elb", "1",
)
}
resource "aws_subnet" "database" {
count = var.subnet_count
availability_zone = data.aws_availability_zones.available.names[count.index]
cidr_block = "10.0.3${count.index}.0/24"
vpc_id = aws_vpc.tf_eks.id
tags = map(
"Name", "database"
)
}
resource "aws_route_table" "application" {
count = var.subnet_count
vpc_id = aws_vpc.tf_eks.id
route {
cidr_block = "0.0.0.0/0"
nat_gateway_id = aws_nat_gateway.tf_eks.*.id[count.index]
}
tags = {
Name = "application"
}
}
resource "aws_route_table" "database" {
vpc_id = aws_vpc.tf_eks.id
tags = {
Name = "database"
}
}
resource "aws_route_table" "gateway" {
vpc_id = aws_vpc.tf_eks.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.tf_eks.id
}
tags = {
Name = "gateway"
}
}
resource "aws_route_table_association" "application" {
count = var.subnet_count
subnet_id = aws_subnet.application.*.id[count.index]
route_table_id = aws_route_table.application.*.id[count.index]
}
resource "aws_route_table_association" "database" {
count = var.subnet_count
subnet_id = aws_subnet.database.*.id[count.index]
route_table_id = aws_route_table.database.id
}
resource "aws_route_table_association" "gateway" {
count = var.subnet_count
subnet_id = aws_subnet.gateway.*.id[count.index]
route_table_id = aws_route_table.gateway.id
}
resource "aws_internet_gateway" "tf_eks" {
vpc_id = aws_vpc.tf_eks.id
tags = {
Name = "internet_gateway"
}
}
resource "aws_eip" "nat_gateway" {
count = var.subnet_count
vpc = true
}
resource "aws_nat_gateway" "tf_eks" {
count = var.subnet_count
allocation_id = aws_eip.nat_gateway.*.id[count.index]
subnet_id = aws_subnet.gateway.*.id[count.index]
tags = {
Name = "nat_gateway"
}
depends_on = [aws_internet_gateway.tf_eks]
}
Security groups:
resource "aws_security_group" "eks" {
name = "tf-eks-master"
description = "Cluster communication with worker nodes"
vpc_id = var.vpc_id
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_security_group" "node" {
name = "tf-eks-node"
description = "Security group for all nodes in the cluster"
vpc_id = var.vpc_id
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_security_group_rule" "main-node-ingress-self" {
type = "ingress"
description = "Allow node to communicate with each other"
from_port = 0
protocol = "-1"
security_group_id = aws_security_group.node.id
to_port = 65535
cidr_blocks = var.subnet_cidrs
}
resource "aws_security_group_rule" "main-node-ingress-cluster" {
type = "ingress"
description = "Allow worker Kubelets and pods to receive communication from the cluster control plane"
from_port = 1025
protocol = "tcp"
security_group_id = aws_security_group.node.id
source_security_group_id = aws_security_group.eks.id
to_port = 65535
}
kubectl get all --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default pod/go-hello-world-68545f84bc-5st4s 1/1 Running 0 35s
default pod/go-hello-world-68545f84bc-bkwpb 1/1 Running 0 35s
default pod/go-hello-world-68545f84bc-kmfbq 1/1 Running 0 35s
kube-system pod/alb-ingress-controller-5f9cb4b7c4-w858g 1/1 Running 0 2m7s
kube-system pod/aws-node-8jfkf 1/1 Running 0 67m
kube-system pod/aws-node-d7s7w 1/1 Running 0 67m
kube-system pod/aws-node-termination-handler-g5fmj 1/1 Running 0 67m
kube-system pod/aws-node-termination-handler-q5tz5 1/1 Running 0 67m
kube-system pod/aws-node-termination-handler-tmzmr 1/1 Running 0 67m
kube-system pod/aws-node-vswpf 1/1 Running 0 67m
kube-system pod/coredns-5c4dd4cc7-sk474 1/1 Running 0 71m
kube-system pod/coredns-5c4dd4cc7-zplwg 1/1 Running 0 71m
kube-system pod/kube-proxy-5m9dn 1/1 Running 0 67m
kube-system pod/kube-proxy-8tn9l 1/1 Running 0 67m
kube-system pod/kube-proxy-qs652 1/1 Running 0 67m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 172.20.0.1 <none> 443/TCP 71m
kube-system service/kube-dns ClusterIP 172.20.0.10 <none> 53/UDP,53/TCP 71m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/aws-node 3 3 3 3 3 <none> 71m
kube-system daemonset.apps/aws-node-termination-handler 3 3 3 3 3 <none> 68m
kube-system daemonset.apps/kube-proxy 3 3 3 3 3 <none> 71m
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
default deployment.apps/go-hello-world 3/3 3 3 37s
kube-system deployment.apps/alb-ingress-controller 1/1 1 1 2m9s
kube-system deployment.apps/coredns 2/2 2 2 71m
NAMESPACE NAME DESIRED CURRENT READY AGE
default replicaset.apps/go-hello-world-68545f84bc 3 3 3 37s
kube-system replicaset.apps/alb-ingress-controller-5f9cb4b7c4 1 1 1 2m9s
kube-system replicaset.apps/coredns-5c4dd4cc7 2 2 2 71m
Answer 0: (score: 0)
I haven't tested this, but let me point out the problems I noticed while going through your resources.
In the ALB resource args, you have the following:
"--aws-vpc-id=${data.aws_vpc.selected.id}",
However, you don't have any data resource that pulls in this VPC. Also, a data source can run without any dependencies, and it won't pull in information about a VPC that hasn't been created yet.
If your Terraform is modularized, output the VPC ID from the VPC module and use that. If all the resources are in the same file/folder, reference it directly as aws_vpc.tf_eks.id.
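Alternatively, if you want to keep the data.aws_vpc.selected references, define the missing data source, something like this (a sketch — the tag value is an assumption, match it to however your VPC is actually tagged):

data "aws_vpc" "selected" {
  tags = {
    Name = "tf_eks" # assumed tag on aws_vpc.tf_eks
  }
}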
Answer 1: (score: 0)
Could you try adding these lines and then re-applying with kubectl?
# ALB's Target Group Configurations
alb.ingress.kubernetes.io/backend-protocol: HTTPS
alb.ingress.kubernetes.io/healthcheck-protocol: HTTPS
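Merged into the manifest from the question, the metadata block would look like this (sketch):

metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/scheme: "internet-facing"
    alb.ingress.kubernetes.io/target-type: "ip"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80},{"HTTPS": 443}]'
    # ALB's Target Group Configurations
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTPS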
If the target groups still aren't being created, check the controller's logs:
kubectl logs -n kube-system deployment.apps/alb-ingress-controller