Summary

Jenkins runs in a Kubernetes cluster. The cluster was just upgraded to 1.19.7, and now the Jenkins build scripts fail when running
sh "kubectl --kubeconfig ${args.config} config use-context ${args.context}"
with the error
io.fabric8.kubernetes.client.KubernetesClientException: Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy)
But which permissions or roles do I need to change?

Here are more details.

Jenkins runs as a master inside the Kubernetes cluster; it picks up Git jobs and then creates slave pods that should also run in the same cluster. We have a namespace called "jenkins" in the cluster. Jenkins builds our microservice applications in their own containers, which are then promoted through the test, demo and production pipelines.

The cluster was updated to Kubernetes 1.19.7 with kops. Everything still deploys, runs and is reachable as normal. As a user you would not suspect anything is wrong with the applications running inside the cluster; they are all accessible through the browser and the pods show no significant issues.
Jenkins is still accessible (running version 2.278, with Kubernetes plugin 1.29.1, Kubernetes Credentials 0.8.0 and Kubernetes Client API plugin 4.13.2-1).

I can log in to Jenkins and see everything I would normally expect to see.

I can connect to the cluster with Lens and view all nodes, pods etc. as usual.

However, and this is where our problem lies since the 1.19.7 upgrade, when a Jenkins job starts it now always fails while trying to set the kubectl context.

We get this error in every build pipeline, at the same place...
[Pipeline] load
[Pipeline] { (JenkinsUtil.groovy)
[Pipeline] }
[Pipeline] // load
[Pipeline] stage
[Pipeline] { (Set-Up and checks)
[Pipeline] withCredentials
Masking supported pattern matches of $KUBECONFIG or $user or $password
[Pipeline] {
[Pipeline] container
[Pipeline] {
[Pipeline] sh
Warning: A secret was passed to "sh" using Groovy String interpolation, which is insecure.
Affected argument(s) used the following variable(s): [KUBECONFIG, user]
See https://****.io/redirect/groovy-string-interpolation for details.
java.net.ProtocolException: Expected HTTP 101 response but was '403 Forbidden'
at okhttp3.internal.ws.RealWebSocket.checkResponse(RealWebSocket.java:229)
at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:196)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:203)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] echo
io.fabric8.kubernetes.client.KubernetesClientException: Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy)
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
[Bitbucket] Notifying commit build result
[Bitbucket] Build result notified
Now I assume this is security related... but I am not sure what to change.

I can see it is using system:anonymous, which is probably restricted in newer Kubernetes versions, but I am not sure how to supply a different user, or how to allow this to work for the Jenkins master in this namespace.
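Before touching any roles, it can help to confirm which identity the kubeconfig actually presents, and whether that identity is allowed the exact access the error names. A minimal diagnostic sketch (standard kubectl commands; `system:serviceaccount:jenkins:jenkins` is assumed here to be the subject worth testing):

```
# Show the user entry the current context resolves to; an empty or
# missing "user" section means requests fall back to system:anonymous
kubectl --kubeconfig "$KUBECONFIG" config view --minify

# Ask the apiserver about the exact verb/resource/subresource from the
# Forbidden message, first as the current identity...
kubectl --kubeconfig "$KUBECONFIG" auth can-i get nodes --subresource=proxy

# ...then impersonating the jenkins service account
kubectl auth can-i get nodes --subresource=proxy \
    --as=system:serviceaccount:jenkins:jenkins
```

If the first command shows no credentials for the context (or expired client certificates), the problem sits in the kubeconfig rather than in RBAC.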
When we run and deploy Jenkins, I can see the following service accounts:

kind: ServiceAccount
apiVersion: v1
metadata:
  name: jenkins
  namespace: jenkins
  selfLink: /api/v1/namespaces/jenkins/serviceaccounts/jenkins
  uid: a81a479a-b525-4b01-be39-4445796c6eb1
  resourceVersion: '94146677'
  creationTimestamp: '2020-08-20T13:32:35Z'
  labels:
    app: jenkins-master
    app.kubernetes.io/managed-by: Helm
    chart: jenkins-acme-2.278.102
    heritage: Helm
    release: jenkins-acme-v2
  annotations:
    meta.helm.sh/release-name: jenkins-acme-v2
    meta.helm.sh/release-namespace: jenkins
secrets:
  - name: jenkins-token-lqgk5
and also:

kind: ServiceAccount
apiVersion: v1
metadata:
  name: jenkins-deployer
  namespace: jenkins
  selfLink: /api/v1/namespaces/jenkins/serviceaccounts/jenkins-deployer
  uid: 4442ec9b-9cbd-11e9-a350-06cfb66a82f6
  resourceVersion: '2157387'
  creationTimestamp: '2019-07-02T11:33:51Z'
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"jenkins-deployer","namespace":"jenkins"}}
secrets:
  - name: jenkins-deployer-token-mdfq9
As well as the following roles.

jenkins-role:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{"meta.helm.sh/release-name":"jenkins-acme-v2","meta.helm.sh/release-namespace":"jenkins"},"creationTimestamp":"2020-08-20T13:32:35Z","labels":{"app":"jenkins-master","app.kubernetes.io/managed-by":"Helm","chart":"jenkins-acme-2.278.102","heritage":"Helm","release":"jenkins-acme-v2"},"name":"jenkins-role","namespace":"jenkins","selfLink":"/apis/rbac.authorization.k8s.io/v1/namespaces/jenkins/roles/jenkins-role","uid":"de5431f6-d576-4804-b132-6562d0ba7a94"},"rules":[{"apiGroups":["","extensions"],"resources":["*"],"verbs":["*"]},{"apiGroups":[""],"resources":["nodes"],"verbs":["get","list","watch","update"]}]}
    meta.helm.sh/release-name: jenkins-acme-v2
    meta.helm.sh/release-namespace: jenkins
  creationTimestamp: '2020-08-20T13:32:35Z'
  labels:
    app: jenkins-master
    app.kubernetes.io/managed-by: Helm
    chart: jenkins-acme-2.278.102
    heritage: Helm
    release: jenkins-acme-v2
  name: jenkins-role
  namespace: jenkins
  resourceVersion: '94734324'
  selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/jenkins/roles/jenkins-role
  uid: de5431f6-d576-4804-b132-6562d0ba7a94
rules:
  - apiGroups:
      - ''
      - extensions
    resources:
      - '*'
    verbs:
      - '*'
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
      - list
      - watch
      - update
jenkins-deployer-role:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins-deployer-role
  namespace: jenkins
  selfLink: >-
    /apis/rbac.authorization.k8s.io/v1/namespaces/jenkins/roles/jenkins-deployer-role
  uid: 87b6486e-6576-11e8-92a9-06bdf97be268
  resourceVersion: '94731699'
  creationTimestamp: '2018-06-01T08:33:59Z'
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{},"creationTimestamp":"2018-06-01T08:33:59Z","name":"jenkins-deployer-role","namespace":"jenkins","selfLink":"/apis/rbac.authorization.k8s.io/v1/namespaces/jenkins/roles/jenkins-deployer-role","uid":"87b6486e-6576-11e8-92a9-06bdf97be268"},"rules":[{"apiGroups":[""],"resources":["pods"],"verbs":["*"]},{"apiGroups":[""],"resources":["deployments","services"],"verbs":["*"]}]}
rules:
  - verbs:
      - '*'
    apiGroups:
      - ''
    resources:
      - pods
  - verbs:
      - '*'
    apiGroups:
      - ''
    resources:
      - deployments
      - services
and jenkins-namespace-manager:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins-namespace-manager
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/jenkins-namespace-manager
  uid: 93e80d54-6346-11e8-92a9-06bdf97be268
  resourceVersion: '94733699'
  creationTimestamp: '2018-05-29T13:45:41Z'
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"creationTimestamp":"2018-05-29T13:45:41Z","name":"jenkins-namespace-manager","selfLink":"/apis/rbac.authorization.k8s.io/v1/clusterroles/jenkins-namespace-manager","uid":"93e80d54-6346-11e8-92a9-06bdf97be268"},"rules":[{"apiGroups":[""],"resources":["namespaces"],"verbs":["get","watch","list","create"]},{"apiGroups":[""],"resources":["nodes"],"verbs":["get","list","watch","update"]}]}
rules:
  - verbs:
      - get
      - watch
      - list
      - create
    apiGroups:
      - ''
    resources:
      - namespaces
  - verbs:
      - get
      - list
      - watch
      - update
    apiGroups:
      - ''
    resources:
      - nodes
And finally the jenkins-deployer-role ClusterRole:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"creationTimestamp":"2018-05-29T13:29:43Z","name":"jenkins-deployer-role","selfLink":"/apis/rbac.authorization.k8s.io/v1/clusterroles/jenkins-deployer-role","uid":"58e1912e-6344-11e8-92a9-06bdf97be268"},"rules":[{"apiGroups":["","extensions","apps","rbac.authorization.k8s.io"],"resources":["*"],"verbs":["*"]},{"apiGroups":["policy"],"resources":["poddisruptionbudgets","podsecuritypolicies"],"verbs":["create","delete","deletecollection","patch","update","use","get"]},{"apiGroups":["","extensions","apps","rbac.authorization.k8s.io"],"resources":["nodes"],"verbs":["get","list","watch","update"]}]}
  creationTimestamp: '2018-05-29T13:29:43Z'
  name: jenkins-deployer-role
  resourceVersion: '94736572'
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/jenkins-deployer-role
  uid: 58e1912e-6344-11e8-92a9-06bdf97be268
rules:
  - apiGroups:
      - ''
      - extensions
      - apps
      - rbac.authorization.k8s.io
    resources:
      - '*'
    verbs:
      - '*'
  - apiGroups:
      - policy
    resources:
      - poddisruptionbudgets
      - podsecuritypolicies
    verbs:
      - create
      - delete
      - deletecollection
      - patch
      - update
      - use
      - get
  - apiGroups:
      - ''
      - extensions
      - apps
      - rbac.authorization.k8s.io
    resources:
      - nodes
    verbs:
      - get
      - list
      - watch
      - update
There are also the following bindings..

I'm really stuck on this. I don't want to give system:anonymous access to everything, although I guess that could be an option.
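For what it's worth, the narrowest grant matching the Forbidden message (get on nodes/proxy) for the jenkins service account would look roughly like the sketch below. The name jenkins-node-proxy is made up for illustration, and note that nodes is a cluster-scoped resource, so a namespaced Role cannot carry this rule; it has to be a ClusterRole:

```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jenkins-node-proxy          # hypothetical name
rules:
  - apiGroups: ['']
    resources: ['nodes/proxy']      # the subresource from the error
    verbs: ['get']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-node-proxy          # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins-node-proxy
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: jenkins
```

A binding like this only helps if the request actually arrives authenticated as that service account; if it still reaches the apiserver as system:anonymous, no per-subject binding will be consulted.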
The Jenkinsfile that builds it is:

Jenkinsfile
import org.jenkinsci.plugins.workflow.steps.FlowInterruptedException
def label = "worker-${UUID.randomUUID().toString()}"
def dockerRegistry = "id.dkr.ecr.eu-west-1.amazonaws.com"
def localHelmRepository = "acme-helm"
def artifactoryHelmRepository = "https://acme.jfrog.io/acme/$localHelmRepository"
def jenkinsContext = "jenkins-staging"
def MAJOR = 2 // Change HERE
def MINOR = 278 // Change HERE
def PATCH = BUILD_NUMBER
def chartVersion = "X.X.X"
def name = "jenkins-acme"
def projectName = "$name"
def helmPackageName = "$projectName"
def helmReleaseName = "$name-v$MAJOR"
def fullVersion = "$MAJOR.$MINOR.$PATCH"
def jenkinsVersion = "${MAJOR}.${MINOR}" // Gets passed to Dockerfile for getting image from Docker hub
podTemplate(label: label, containers: [
containerTemplate(name: 'docker', image: 'docker:18.05-dind', ttyEnabled: true, privileged: true),
containerTemplate(name: 'perl', image: 'perl', ttyEnabled: true, command: 'cat'),
containerTemplate(name: 'kubectl', image: 'lachlanevenson/k8s-kubectl:v1.18.8', command: 'cat', ttyEnabled: true),
containerTemplate(name: 'helm', image: 'id.dkr.ecr.eu-west-1.amazonaws.com/k8s-helm:3.2.0', command: 'cat', ttyEnabled: true),
containerTemplate(name: 'clair-local-scan', image: '738398925563.dkr.ecr.eu-west-1.amazonaws.com/clair-local-scan:latest', ttyEnabled: true, envVars: [envVar(key: 'DOCKER_HOST', value: 'tcp://localhost:2375')]),
containerTemplate(name: 'clair-scanner', image: '738398925563.dkr.ecr.eu-west-1.amazonaws.com/clair-scanner:latest', command: 'cat', ttyEnabled: true, envVars: [envVar(key: 'DOCKER_HOST', value: 'tcp://localhost:2375')]),
containerTemplate(name: 'clair-db', image: "738398925563.dkr.ecr.eu-west-1.amazonaws.com/clair-db:latest", ttyEnabled: true),
containerTemplate(name: 'aws-cli', image: 'mesosphere/aws-cli', command: 'cat', ttyEnabled: true)
], volumes: [
emptyDirVolume(mountPath: '/var/lib/docker')
]) {
try {
node(label) {
def myRepo = checkout scm
jenkinsUtils = load 'JenkinsUtil.groovy'
stage('Set-Up and checks') {
jenkinsContext = 'jenkins-staging'
withCredentials([
file(credentialsId: 'kubeclt-staging-config', variable: 'KUBECONFIG'),
usernamePassword(credentialsId: 'jenkins_artifactory', usernameVariable: 'user', passwordVariable: 'password')]) {
jenkinsUtils.initKubectl(jenkinsUtils.appendToParams("kubectl", [
namespaces: ["jenkins"],
context : jenkinsContext,
config : KUBECONFIG])
)
jenkinsUtils.initHelm(jenkinsUtils.appendToParams("helm", [
namespace : "jenkins",
helmRepo : artifactoryHelmRepository,
username : user,
password : password,
])
)
}
}
stage('docker build and push') {
container('perl'){
def JENKINS_HOST = "jenkins_api:1Ft38erDFjjfM6q3a6y7@jenkins.acme.com"
sh "curl -sSL \"https://${JENKINS_HOST}/pluginManager/api/xml?depth=1&xpath=/*/*/shortName|/*/*/version&wrapper=plugins\" | perl -pe 's/.*?<shortName>([\\w-]+).*?<version>([^<]+)()(<\\/\\w+>)+/\\1 \\2\\n/g'|sed 's/ /:/' > plugins.txt"
sh "cat plugins.txt"
}
container('docker'){
sh "ls -la"
sh "docker version"
// This is because of this annoying "feature" where the command ran from docker contains a \r character which must be removed
sh 'eval $(docker run --rm -t $(tty &>/dev/null && echo "-n") -v "$(pwd):/project" mesosphere/aws-cli ecr get-login --no-include-email --region eu-west-1 | tr \'\\r\' \' \')'
sh "sed \"s/JENKINS_VERSION/${jenkinsVersion}/g\" Dockerfile > Dockerfile.modified"
sh "cat Dockerfile.modified"
sh "docker build -t $name:$fullVersion -f Dockerfile.modified ."
sh "docker tag $name:$fullVersion $dockerRegistry/$name:$fullVersion"
sh "docker tag $name:$fullVersion $dockerRegistry/$name:latest"
sh "docker tag $name:$fullVersion $dockerRegistry/$name:${MAJOR}"
sh "docker tag $name:$fullVersion $dockerRegistry/$name:${MAJOR}.$MINOR"
sh "docker tag $name:$fullVersion $dockerRegistry/$name:${MAJOR}.${MINOR}.$PATCH"
sh "docker push $dockerRegistry/$name:$fullVersion"
sh "docker push $dockerRegistry/$name:latest"
sh "docker push $dockerRegistry/$name:${MAJOR}"
sh "docker push $dockerRegistry/$name:${MAJOR}.$MINOR"
sh "docker push $dockerRegistry/$name:${MAJOR}.${MINOR}.$PATCH"
}
}
stage('helm build') {
namespace = 'jenkins'
jenkinsContext = 'jenkins-staging'
withCredentials([
file(credentialsId: 'kubeclt-staging-config', variable: 'KUBECONFIG'),
usernamePassword(credentialsId: 'jenkins_artifactory', usernameVariable: 'user', passwordVariable: 'password')]) {
jenkinsUtils.setContext(jenkinsUtils.appendToParams("kubectl", [
context: jenkinsContext,
config : KUBECONFIG])
)
jenkinsUtils.helmDeploy(jenkinsUtils.appendToParams("helm", [
namespace : namespace,
credentials: true,
release : helmReleaseName,
args : [replicaCount : 1,
imageTag : fullVersion,
namespace : namespace,
"MajorVersion" : MAJOR]])
)
jenkinsUtils.helmPush(jenkinsUtils.appendToParams("helm", [
helmRepo : artifactoryHelmRepository,
username : user,
password : password,
BuildInfo : BRANCH_NAME,
Commit : "${myRepo.GIT_COMMIT}"[0..6],
fullVersion: fullVersion
]))
}
}
stage('Deployment') {
namespace = 'jenkins'
jenkinsContext = 'jenkins-staging'
withCredentials([
file(credentialsId: 'kubeclt-staging-config', variable: 'KUBECONFIG')]) {
jenkinsUtils.setContext(jenkinsUtils.appendToParams("kubectl", [
context: jenkinsContext,
config : KUBECONFIG])
)
jenkinsUtils.helmDeploy(jenkinsUtils.appendToParams("helm", [
dryRun : false,
namespace : namespace,
package : "${localHelmRepository}/${helmPackageName}",
credentials: true,
release : helmReleaseName,
args : [replicaCount : 1,
imageTag : fullVersion,
namespace : namespace,
"MajorVersion" : MAJOR
]
])
)
}
}
}
} catch (FlowInterruptedException e) {
def reasons = e.getCauses().collect { it.getShortDescription() }.join(",")
println "Interupted. Reason: $reasons"
currentBuild.result = 'SUCCESS'
return
} catch (error) {
println error
throw error
}
}
And the Groovy file (JenkinsUtil.groovy):
templateMap = [
"helm" : [
containerName: "helm",
dryRun : true,
namespace : "test",
tag : "xx",
package : "jenkins-acme",
credentials : false,
ca_cert : null,
helm_cert : null,
helm_key : null,
args : [
majorVersion : 0,
replicaCount : 1
]
],
"kubectl": [
containerName: "kubectl",
context : null,
config : null,
]
]
def appendToParams(String templateName, Map newArgs) {
def copyTemplate = templateMap[templateName].clone()
newArgs.each { paramName, paramValue ->
if (paramName.equalsIgnoreCase("args"))
newArgs[paramName].each {
name, value -> copyTemplate[paramName][name] = value
}
else
copyTemplate[paramName] = paramValue
}
return copyTemplate
}
def setContext(Map args) {
container(args.containerName) {
sh "kubectl --kubeconfig ${args.config} config use-context ${args.context}"
}
}
def initKubectl(Map args) {
container(args.containerName) {
sh "kubectl --kubeconfig ${args.config} config use-context ${args.context}"
for (namespace in args.namespaces)
sh "kubectl -n $namespace get pods"
}
}
def initHelm(Map args) {
container(args.containerName) {
// sh "helm init --client-only"
def command = "helm version --short"
// if (args.credentials)
// command = "$command --tls --tls-ca-cert ${args.ca_cert} --tls-cert ${args.helm_cert} --tls-key ${args.helm_key}"
//
// sh "$command --tiller-connection-timeout 5 --tiller-namespace tiller-${args.namespace}"
sh "helm repo add acme-helm ${args.helmRepo} --username ${args.username} --password ${args.password}"
sh "helm repo update"
}
}
def helmDeploy(Map args) {
container(args.containerName) {
sh "helm repo update"
def command = "helm upgrade"
// if (args.credentials)
// command = "$command --tls --tls-ca-cert ${args.ca_cert} --tls-cert ${args.helm_cert} --tls-key ${args.helm_key}"
if (args.dryRun) {
sh "helm lint ${args.package}"
command = "$command --dry-run --debug"
}
// command = "$command --install --tiller-namespace tiller-${args.namespace} --namespace ${args.namespace}"
command = "$command --install --namespace ${args.namespace}"
def setVar = "--set "
args.args.each { key, value -> setVar = "$setVar$key=\"${value.toString().replace(",", "\\,")}\"," }
setVar = setVar[0..-2] // drop the trailing comma ([0..-1] was a no-op)
sh "$command $setVar --devel ${args.release} ${args.package}"
}
}
def helmPush(Map args){
container(args.containerName) {
sh "helm package ${args.package} --version ${args.fullVersion} --app-version ${args.fullVersion}+${args.BuildInfo}-${args.Commit}"
sh "curl -u${args.username}:${args.password} -T ${args.package}-${args.fullVersion}.tgz \"${args.helmRepo}/${args.package}-${args.fullVersion}.tgz\""
}
}
return this
From the logs, it seems that when it runs
sh "kubectl --kubeconfig ${args.config} config use-context ${args.context}"
it throws the error
io.fabric8.kubernetes.client.KubernetesClientException: Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy)
But which permissions or roles do I need to change?

Many thanks, Nick
Answer (score: 0)
Take a look at this section of the official Kubernetes documentation, and at this answer provided by Prafull Ladha:

"The above error means that your apiserver doesn't have the credentials (kubelet cert and key) to authenticate the kubelet's log/exec commands, which is why you get the Forbidden error message.

You need to provide --kubelet-client-certificate=<path_to_cert> and --kubelet-client-key=<path_to_key> to your apiserver, so that the apiserver authenticates to the kubelet with that certificate/key pair."
A very similar problem was also reported on GitHub in this thread, where you can find the following explanation:

"This means the api server has not been given credentials to use to authenticate to kubelets when proxying log/exec requests."

See the apiserver configuration described in https://kubernetes.io/docs/admin/kubelet-authentication-authorization/#kubelet-authentication
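Since this cluster is managed by kops, the two flags from the quoted answer would normally be set through the kubeAPIServer section of the cluster spec rather than by editing the apiserver manifest by hand. A hedged sketch (the field names are my reading of the kops ClusterSpec, and the certificate paths are placeholders, not values taken from this cluster):

```
# kops edit cluster <cluster-name>
spec:
  kubeAPIServer:
    # these should map to --kubelet-client-certificate / --kubelet-client-key
    kubeletClientCertificate: /path/to/kubelet-api.crt
    kubeletClientKey: /path/to/kubelet-api.key
```

Applying the change would then be a kops update cluster --yes followed by a rolling update of the control-plane nodes, so the apiserver restarts with the new flags.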