I'm building a small application that listens, via the API, for changes to various resources. For a task like that it needs permissions, so I figured I'd create a ClusterRole:
{{- if not .Values.skipRole }}
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ include "kubewatcher.serviceAccountName" . }}
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - events
      - namespaces
      - services
      - deployments
      - replicationcontrollers
      - replicasets
      - daemonsets
      - persistentvolumes
    verbs:
      - list
      - watch
      - get
  - apiGroups:
      - batch
    resources:
      - cronjobs
      - jobs
    verbs:
      - get
      - list
      - watch
{{- end }}
I also created a ServiceAccount:
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "kubewatcher.serviceAccountName" . }}
  labels:
    {{- include "kubewatcher.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
{{- end }}
And finally a ClusterRoleBinding:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ include "kubewatcher.fullname" . }}
subjects:
  - kind: ServiceAccount
    name: {{ include "kubewatcher.fullname" . }}
    namespace: {{ .Release.Namespace }}
roleRef:
  kind: ClusterRole
  name: {{ include "kubewatcher.serviceAccountName" . }}
  apiGroup: rbac.authorization.k8s.io
My application can talk to the API and everything seems fine. However, when I install a second instance of the application, as I do while developing it further, I get the following error message:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRoleBinding "kubewatcher" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "kubewatcher-dev": current value is "kubewatcher"
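If I've understood the error correctly, Helm 3 stamps every resource it creates with ownership metadata, and the clash comes from those annotations on the already-existing cluster-scoped binding. Going by the names in the error message, the first release presumably left something like this on it:

```yaml
# Ownership metadata that Helm 3 adds to resources it manages.
# A second release refuses to adopt the resource because these
# values still point at the first release.
metadata:
  labels:
    app.kubernetes.io/managed-by: Helm
  annotations:
    meta.helm.sh/release-name: kubewatcher
    meta.helm.sh/release-namespace: kubewatcher
```

Since a ClusterRoleBinding is cluster-scoped, both of my releases render it with the same name, which is (I assume) why the second install trips over the first one's metadata.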
I install the second instance with the following command; my thinking was that --set skipRole=true would let me bind to the ClusterRole that already exists:
helm install kubewatcher --namespace kubewatcher-dev helm/ --set skipRole=true
Am I on the right track? Is there a better way? I've tried to post the relevant parts of my code, but let me know if I should post anything else.