I want to grant a Kubernetes service account the privilege to run kubectl --token $token get pod --all-namespaces.
I know how to do this for a single namespace, but not for all namespaces (including namespaces that might be created in the future, and without granting the service account full admin privileges).
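For reference, the single-namespace setup I already have working looks roughly like this (a sketch only; the Role name read-pods and the namespace my-namespace are illustrative):
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: my-namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: my-namespace
subjects:
- kind: ServiceAccount
  name: test
  namespace: kube-system
roleRef:
  kind: Role
  name: read-pods
  apiGroup: rbac.authorization.k8s.io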
Currently I receive this error message:
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:kube-system:test" cannot list resource "pods" in API group "" at the cluster scope
Which (cluster) roles and role bindings are required?
Update: Assigning the view ClusterRole to the service account via the following ClusterRoleBinding works and is a step forward. However, I would like to restrict the service account's permissions to the minimum required.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test
subjects:
- kind: ServiceAccount
  name: test
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
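To see how much the view role actually grants (and why it is broader than needed), the effective permissions can be listed via impersonation, assuming your own user is allowed to impersonate service accounts:
kubectl auth can-i --list --as=system:serviceaccount:kube-system:test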
The service account's token can be extracted as follows:
secret=$(kubectl get serviceaccount test -n kube-system -o=jsonpath='{.secrets[0].name}')
token=$(kubectl get secret $secret -n kube-system -o=jsonpath='{.data.token}' | base64 --decode -)
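Note that on Kubernetes 1.24 and later no token Secret is created automatically for a service account, so the jsonpath lookup above may return nothing; in that case a short-lived token can be requested instead, for example:
token=$(kubectl create token test -n kube-system)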
Answer 0 (score: 2)
A ClusterRole and a ClusterRoleBinding are the right objects when you need all namespaces; just scope the permissions down:
kind: ServiceAccount
apiVersion: v1
metadata:
  name: all-ns-pod-get
  namespace: your-ns
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: all-ns-pod-get
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: all-ns-pod-get
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: all-ns-pod-get
subjects:
- kind: ServiceAccount
  name: all-ns-pod-get
  namespace: your-ns   # the namespace is required for a ServiceAccount subject
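Assuming the three manifests above are saved as all-ns-pod-get.yaml (an illustrative file name), they can be applied and tried out in the way the question describes, e.g.:
kubectl apply -f all-ns-pod-get.yaml
# extract a token for the all-ns-pod-get account (see the question, or use kubectl create token), then:
kubectl --token $token get pod --all-namespaces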
Pods in the your-ns namespace that run under this service account (serviceAccountName: all-ns-pod-get) will then have a Kubernetes token automounted, so you can use plain kubectl or a Kubernetes SDK inside the pod without passing any secret around. Note that you don't need to pass --token at all; just run the command from a pod in the namespace where the ServiceAccount was created.
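For example, even without kubectl installed in the container, the mounted credentials can be used directly against the API server; this sketch relies only on the standard in-cluster paths and environment variables:
# inside a pod running under the all-ns-pod-get service account
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CA=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
# list pods across all namespaces
curl --cacert "$CA" -H "Authorization: Bearer $TOKEN" \
  "https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api/v1/pods"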
Here is a good article explaining the concepts: https://medium.com/@ishagirdhar/rbac-in-kubernetes-demystified-72424901fcb3
Answer 1 (score: 1)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test
subjects:
- kind: ServiceAccount
  name: test
  namespace: default
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
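Assuming these three manifests are saved together as, say, test-rbac.yaml (an illustrative name), apply them and optionally check the resulting permissions via impersonation (your own user must be allowed to impersonate service accounts):
kubectl apply -f test-rbac.yaml
kubectl auth can-i watch pods --all-namespaces --as=system:serviceaccount:default:test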
Deploy a test pod from the following example:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: test
  name: test
spec:
  serviceAccountName: test
  containers:
  - args:
    - sleep
    - "10000"
    image: alpine
    imagePullPolicy: IfNotPresent
    name: test
    resources:
      requests:
        memory: 100Mi
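With the manifest saved as, say, test-pod.yaml (an illustrative name), create the pod and wait for it to become ready before running the commands below:
kubectl apply -f test-pod.yaml
kubectl wait --for=condition=Ready pod/test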
kubectl exec test -- apk add curl
kubectl exec test -- curl -o /bin/kubectl https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl
kubectl exec test -- sh -c 'chmod +x /bin/kubectl'
master $ kubectl exec test -- sh -c 'kubectl get pods --all-namespaces'
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
app1          nginx-6f858d4d45-m2w6f           1/1     Running   0          19m
app1          nginx-6f858d4d45-rdvht           1/1     Running   0          19m
app1          nginx-6f858d4d45-sqs58           1/1     Running   0          19m
app1          test                             1/1     Running   0          18m
app2          nginx-6f858d4d45-6rrfl           1/1     Running   0          19m
app2          nginx-6f858d4d45-djz4b           1/1     Running   0          19m
app2          nginx-6f858d4d45-mvscr           1/1     Running   0          19m
app3          nginx-6f858d4d45-88rdt           1/1     Running   0          19m
app3          nginx-6f858d4d45-lfjx2           1/1     Running   0          19m
app3          nginx-6f858d4d45-szfdd           1/1     Running   0          19m
default       test                             1/1     Running   0          6m
kube-system   coredns-78fcdf6894-g7l6n         1/1     Running   0          33m
kube-system   coredns-78fcdf6894-r87mx         1/1     Running   0          33m
kube-system   etcd-master                      1/1     Running   0          32m
kube-system   kube-apiserver-master            1/1     Running   0          32m
kube-system   kube-controller-manager-master   1/1     Running   0          32m
kube-system   kube-proxy-vnxb7                 1/1     Running   0          33m
kube-system   kube-proxy-vwt6z                 1/1     Running   0          33m
kube-system   kube-scheduler-master            1/1     Running   0          32m
kube-system   weave-net-d5dk8                  2/2     Running   1          33m
kube-system   weave-net-qjt76                  2/2     Running   1          33m