I am using Amazon EKS for Kubernetes deployments (the cluster was originally created by an AWS admin user), and I am currently struggling to run kubectl commands against the stack using AWS credentials from an STS assumed role.

I have two EKS stacks in two different AWS accounts (PROD and NONPROD), and I am trying to have a CI/CD tool deploy to both Kubernetes stacks using credentials provided by AWS STS, but I keep getting errors such as error: You must be logged in to the server (the server has asked for the client to provide credentials).

I followed the link below to add additional AWS IAM roles to the configuration, but I am not sure what I did wrong.

I ran "aws eks update-kubeconfig" to update the local .kube/config file, which is populated as follows:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: [hidden]
    server: https://[hidden].eu-west-1.eks.amazonaws.com
  name: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
contexts:
- context:
    cluster: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
    user: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
  name: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
current-context: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
kind: Config
preferences: {}
users:
- name: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - triage-eks
      command: aws-iam-authenticator
I had also previously updated the Kubernetes aws-auth ConfigMap with the following additional role:
data:
  mapRoles: |
    - rolearn: arn:aws:iam::[hidden]:role/ci_deployer
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:masters
My CI/CD EC2 instance can assume the ci_deployer role in both AWS accounts.

Expected: I can run "kubectl version" and see both the client and server versions.
Actual: I get "the server has asked for the client to provide credentials".
What else is missing?
After further testing, I can confirm that kubectl only works from an environment in the same AWS account in which the EKS stack was created (e.g., my CI EC2 instance using its AWS instance role). This means that even though the CI instance in account A can assume the role in account B, and account B's role is included in account B's aws-auth ConfigMap, the CI instance in account A still cannot talk to account B's EKS cluster. I hope this is just due to missing configuration, because it would be unfortunate if a CI tool could not deploy to multiple EKS clusters across multiple AWS accounts using role assumption.
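For reference, the cross-account flow being attempted can be sketched like this (the role ARN and session name are placeholders; the temporary credentials are exported so that the authenticator invoked by kubectl picks them up):

```shell
# Assume the account-B deployer role from the account-A CI instance
CREDS=$(aws sts assume-role \
  --role-arn arn:aws:iam::[hidden]:role/ci_deployer \
  --role-session-name ci-deploy \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)

# Export the temporary credentials for kubectl's exec plugin
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | cut -f1)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | cut -f2)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | cut -f3)

kubectl version
```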
Looking forward to any further pointers on this.
Answer 0 (score: 1)
From Step 1: Create Your Amazon Cluster:

When you create an Amazon EKS cluster, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:masters permissions). Initially, only that IAM entity can make calls to the Kubernetes API server using kubectl.

As you have discovered, you can only access the cluster as the user/role that originally created it.

There is a way to add additional roles to the cluster after creation by editing the aws-auth ConfigMap. By editing it, you can grant different levels of access based on the user or role.

First, you must have the "system:node:{{EC2PrivateDNSName}}" user:
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
This is required for Kubernetes to work properly; it is what allows the worker nodes to join the cluster. The "ARN of instance role" is the role that carries the required policies: AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, AmazonEC2ContainerRegistryReadOnly, etc.
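To apply these changes, the ConfigMap can be edited in place. Note that this must be run as an identity that already has access to the cluster, typically the one that created it:

```shell
# Open the live aws-auth ConfigMap in your editor
kubectl edit -n kube-system configmap/aws-auth
```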
Below that, add your own role:
    - rolearn: arn:aws:iam::[hidden]:role/ci_deployer
      username: ci-deployer
      groups:
        - system:masters
The "username" can actually be set to almost anything. It only appears to matter when custom roles and bindings are added to your EKS cluster.

Also, verify your environment/shell with the command "aws sts get-caller-identity" to check that the AWS credentials are configured correctly. When they are, "get-caller-identity" should return the same role ARN that is specified in aws-auth.
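A quick sanity check along those lines (the output shown is a placeholder; the session name depends on how the role was assumed):

```shell
# Should print the assumed-role ARN that is mapped in aws-auth
aws sts get-caller-identity --query Arn --output text
# e.g. arn:aws:sts::[hidden]:assumed-role/ci_deployer/ci-deploy
```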
Answer 1 (score: 0)
Looks like it's credentials related. Some things you can look at:

Are the credential environment variables set in your CI?

AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY

Is your ~/.aws/credentials file populated correctly in your CI? It should look something like this:
[default]
aws_access_key_id = xxxx
aws_secret_access_key = xxxx
Generally, the environment variables take precedence, so you could have different credentials altogether in those environment variables. It could also be the AWS_PROFILE env variable, or the AWS_PROFILE config in ~/.kube/config:
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
      - "token"
      - "-i"
      - "<cluster-name>"
      # - "-r"
      # - "<role-arn>"
      # env:
      # - name: AWS_PROFILE   <== is this value set?
      #   value: "<aws-profile>"
Is the profile set correctly under ~/.aws/config?
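For completeness, a cross-account profile in ~/.aws/config usually looks something like this (the profile names and the role ARN are placeholders); with AWS_PROFILE pointing at it, the AWS SDK credential chain can perform the role assumption itself:

```
[profile eks-deployer]
# role assumed on top of the base credentials below
role_arn = arn:aws:iam::[hidden]:role/ci_deployer
source_profile = default
region = eu-west-1
```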