helm init error: error installing: deployments.extensions is forbidden when running inside a GitLab Runner

Date: 2019-03-26 16:44:31

Tags: kubernetes gitlab-ci-runner kubernetes-helm

I have connected a self-hosted GitLab (11.8.1) to a self-hosted K8s cluster (1.13.4). There are 3 projects in GitLab, named shipment, authentication_service and shipment_mobile_service.

All projects use the same K8s configuration except for the project namespace.

On the first project, installing Helm Tiller and GitLab Runner from the GitLab UI succeeds.

On the second and third projects only Helm Tiller installs successfully; the GitLab Runner installation fails, with the following log in the install-runner pod:

 Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Error: cannot connect to Tiller
+ sleep 1s
+ echo 'Retrying (30)...'
+ helm repo add runner https://charts.gitlab.io
Retrying (30)...
"runner" has been added to your repositories
+ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "runner" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈ 
+ helm upgrade runner runner/gitlab-runner --install --reset-values --tls --tls-ca-cert /data/helm/runner/config/ca.pem --tls-cert /data/helm/runner/config/cert.pem --tls-key /data/helm/runner/config/key.pem --version 0.2.0 --set 'rbac.create=true,rbac.enabled=true' --namespace gitlab-managed-apps -f /data/helm/runner/config/values.yaml
Error: UPGRADE FAILED: remote error: tls: bad certificate 
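
For what it's worth, the "tls: bad certificate" above means Tiller rejected the client certificate that GitLab mounted into the install-runner pod. A minimal check one could run from a shell inside that pod is sketched below; it is a hedged sketch only, assuming the Helm 2 client is available in the image, that the certificate paths are the same ones the failing command used, and that GitLab's Tiller lives in the gitlab-managed-apps namespace.

# Ask Tiller for its version over TLS with the mounted client certificate.
# If this also fails with "bad certificate", the certificates GitLab stored
# for this cluster no longer match the Tiller that is actually running.
helm version --tls \
  --tls-ca-cert /data/helm/runner/config/ca.pem \
  --tls-cert /data/helm/runner/config/cert.pem \
  --tls-key /data/helm/runner/config/key.pem \
  --tiller-namespace gitlab-managed-apps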

I did not configure gitlab-ci against the K8s cluster on the first project, only on the second and third. The strange thing is that, with the same helm-data (only the names differ), the second project deploys successfully but the third does not.

Since only one GitLab Runner is available (the one from the first project), I assigned both the second and the third project to that runner.

This is the gitlab-ci.yml I use for both projects; the only difference between them is the release name in the helm upgrade command.

stages:
  - test
  - build
  - deploy

variables:
  CONTAINER_IMAGE: dockerhub.linhnh.vn/${CI_PROJECT_PATH}:${CI_PIPELINE_ID}
  CONTAINER_IMAGE_LATEST: dockerhub.linhnh.vn/${CI_PROJECT_PATH}:latest
  CI_REGISTRY: dockerhub.linhnh.vn
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://localhost:2375 # required when using dind

# the test and build stages use docker:dind and succeed

deploy_beta:
  stage: deploy
  image: alpine/helm
  script:
    - echo "Deploy test start ..."
    - helm init --upgrade
    - helm upgrade --install --force shipment-mobile-service --recreate-pods --set image.tag=${CI_PIPELINE_ID} ./helm-data
    - echo "Deploy test completed!"
  environment:
    name: staging
  tags: ["kubernetes_beta"]
  only:
  - master
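
One detail worth noting in this script: helm init --upgrade tries to create or upgrade the Tiller Deployment in kube-system, which needs permissions the job's service account may not have (see the error further below). A hedged variant of the deploy step, assuming a Tiller the client can reach is already running (as the second project's successful log below suggests), would only configure the client side:

# Configure the local Helm 2 client without touching any Tiller Deployment.
helm init --client-only
# Then deploy the chart exactly as before.
helm upgrade --install --force shipment-mobile-service --recreate-pods --set image.tag=${CI_PIPELINE_ID} ./helm-data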

The helm-data chart is really simple, so I don't think I need to paste it here. This is the log from the second project when it deploys successfully:

Running with gitlab-runner 11.7.0 (8bb608ff)
  on runner-gitlab-runner-6c8555c86b-gjt9f XrmajZY2
Using Kubernetes namespace: gitlab-managed-apps
Using Kubernetes executor with image linkyard/docker-helm ...
Waiting for pod gitlab-managed-apps/runner-xrmajzy2-project-15-concurrent-0x2bms to be running, status is Pending
Waiting for pod gitlab-managed-apps/runner-xrmajzy2-project-15-concurrent-0x2bms to be running, status is Pending
Running on runner-xrmajzy2-project-15-concurrent-0x2bms via runner-gitlab-runner-6c8555c86b-gjt9f...
Cloning into '/root/authentication_service'...
Cloning repository...
Checking out 5068bf1f as master...
Skipping Git submodules setup
$ echo "Deploy start ...."
Deploy start ....
$ helm init --upgrade --dry-run --debug
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: helm
        name: tiller
    spec:
      automountServiceAccountToken: true
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        image: gcr.io/kubernetes-helm/tiller:v2.13.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /liveness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        name: tiller
        ports:
        - containerPort: 44134
          name: tiller
        - containerPort: 44135
          name: http
        readinessProbe:
          httpGet:
            path: /readiness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        resources: {}
status: {}

---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  ports:
  - name: tiller
    port: 44134
    targetPort: tiller
  selector:
    app: helm
    name: tiller
  type: ClusterIP
status:
  loadBalancer: {}

...
$ helm upgrade --install --force authentication-service --recreate-pods --set image.tag=${CI_PIPELINE_ID} ./helm-data
WARNING: Namespace "gitlab-managed-apps" doesn't match with previous. Release will be deployed to default
Release "authentication-service" has been upgraded. Happy Helming!
LAST DEPLOYED: Tue Mar 26 05:27:51 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME                    READY  UP-TO-DATE  AVAILABLE  AGE
authentication-service  1/1    1           1          17d

==> v1/Pod(related)
NAME                                    READY  STATUS       RESTARTS  AGE
authentication-service-966c997c4-mglrb  0/1    Pending      0         0s
authentication-service-966c997c4-wzrkj  1/1    Terminating  0         49m

==> v1/Service
NAME                    TYPE      CLUSTER-IP     EXTERNAL-IP  PORT(S)       AGE
authentication-service  NodePort  10.108.64.133  <none>       80:31340/TCP  17d


NOTES:
1. Get the application URL by running these commands:
  export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services authentication-service)
  echo http://$NODE_IP:$NODE_PORT
$ echo "Deploy completed"
Deploy completed
Job succeeded

The third project fails:

Running with gitlab-runner 11.7.0 (8bb608ff)
  on runner-gitlab-runner-6c8555c86b-gjt9f XrmajZY2
Using Kubernetes namespace: gitlab-managed-apps
Using Kubernetes executor with image alpine/helm ...
Waiting for pod gitlab-managed-apps/runner-xrmajzy2-project-18-concurrent-0bv4bx to be running, status is Pending
Waiting for pod gitlab-managed-apps/runner-xrmajzy2-project-18-concurrent-0bv4bx to be running, status is Pending
Waiting for pod gitlab-managed-apps/runner-xrmajzy2-project-18-concurrent-0bv4bx to be running, status is Pending
Waiting for pod gitlab-managed-apps/runner-xrmajzy2-project-18-concurrent-0bv4bx to be running, status is Pending
Running on runner-xrmajzy2-project-18-concurrent-0bv4bx via runner-gitlab-runner-6c8555c86b-gjt9f...
Cloning repository...
Cloning into '/canhnv5/shipmentmobile'...
Checking out 278cbd3d as master...
Skipping Git submodules setup
$ echo "Deploy test start ..."
Deploy test start ...
$ helm init --upgrade
Creating /root/.helm 
Creating /root/.helm/repository 
Creating /root/.helm/repository/cache 
Creating /root/.helm/repository/local 
Creating /root/.helm/plugins 
Creating /root/.helm/starters 
Creating /root/.helm/cache/archive 
Creating /root/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /root/.helm.
Error: error installing: deployments.extensions is forbidden: User "system:serviceaccount:shipment-mobile-service:shipment-mobile-service-service-account" cannot create resource "deployments" in API group "extensions" in the namespace "kube-system"
ERROR: Job failed: command terminated with exit code 1

I can see that both jobs use the same runner XrmajZY2 that I installed from the first project, and the same K8s namespace gitlab-managed-apps.

I think they run in privileged mode, but I don't understand why the second project gets the right permissions while the third does not. Should I create the user system:serviceaccount:shipment-mobile-service:shipment-mobile-service-service-account and bind it to cluster-admin?
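
One way to answer that question directly (not part of the original post, just a hedged diagnostic run from a machine with cluster-admin access) is to ask the API server what the service account named in the error can actually do in kube-system:

# Impersonate the per-project service account and check the exact permission
# that helm init needs; "no" here reproduces the forbidden error above.
kubectl auth can-i create deployments.extensions \
  --namespace kube-system \
  --as system:serviceaccount:shipment-mobile-service:shipment-mobile-service-service-account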

Thanks to @cookiedough's instructions, I performed the following steps:

  • Fork canhv5/shipment-mobile-service into my root account as root/shipment-mobile-service.

  • Delete the gitlab-managed-apps namespace so nothing is left in it, then run kubectl delete -f gitlab-admin-service-account.yaml.

  • Apply that file again, then get the token following @cookiedough's guide.

  • Go back to root/shipment-mobile-service in GitLab, remove the previous cluster, re-add the cluster with the new token, and install Helm Tiller and then GitLab Runner from the GitLab UI.

  • Re-run the job and the magic happens. But I'm still not clear why canhv5/shipment-mobile-service keeps hitting the same error (see the diagnostic sketch below).
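
For that remaining question, a hedged way to compare the two projects is to look at what GitLab created in the project namespace from the error message and which roles that service account is bound to:

# List the service account GitLab created for the project, plus any role
# bindings in that namespace; each binding's roleRef shows what it may do.
kubectl -n shipment-mobile-service get serviceaccounts,rolebindings
# Cluster-wide bindings that mention gitlab, for comparison with gitlab-admin.
kubectl get clusterrolebindings | grep -i gitlab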

1 answer:

Answer 0 (score: 2):

Before you do the following, delete the gitlab-managed-apps namespace:

kubectl delete namespace gitlab-managed-apps
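
Namespace deletion is asynchronous, so it may help to confirm the namespace is really gone before re-installing Tiller and the Runner from the GitLab UI (a hedged extra step, not part of the original answer):

# Should eventually return:
# Error from server (NotFound): namespaces "gitlab-managed-apps" not found
kubectl get namespace gitlab-managed-apps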

Quoting from the GitLab tutorial: you will need to create a serviceaccount and clusterrolebinding for GitLab, and you will need the secret created as a result to connect your project to the cluster.

  

Create a file called gitlab-admin-service-account.yaml with the following contents:

 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: gitlab-admin
   namespace: kube-system
 ---
 apiVersion: rbac.authorization.k8s.io/v1beta1
 kind: ClusterRoleBinding
 metadata:
   name: gitlab-admin
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: cluster-admin
 subjects:
 - kind: ServiceAccount
   name: gitlab-admin
   namespace: kube-system
  

Apply the service account and cluster role binding to your cluster:

kubectl apply -f gitlab-admin-service-account.yaml
  

Output:

 serviceaccount "gitlab-admin" created
 clusterrolebinding "gitlab-admin" created
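
As an optional, hedged sanity check (not in the original answer) that the binding really grants cluster-admin, you can impersonate the new service account:

# Expect "yes": the gitlab-admin service account may do anything cluster-wide.
kubectl auth can-i '*' '*' --as system:serviceaccount:kube-system:gitlab-admin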
  

Retrieve the token for the gitlab-admin service account:

 kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep gitlab-admin | awk '{print $1}')

Copy the <authentication_token> value from the output:

Name:         gitlab-admin-token-b5zv4
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=gitlab-admin
              kubernetes.io/service-account.uid=bcfe66ac-39be-11e8-97e8-026dce96b6e8

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      <authentication_token>
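
Alternatively, a hedged one-liner (under the same assumptions as above, i.e. a Kubernetes 1.13-era cluster where the service-account token secret is listed this way) prints just the decoded token:

# Grab the gitlab-admin token secret and decode only the token field.
kubectl -n kube-system get secret \
  $(kubectl -n kube-system get secret | grep gitlab-admin | awk '{print $1}') \
  -o jsonpath='{.data.token}' | base64 -d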

Follow the rest of the tutorial to connect the cluster to your project, otherwise you will have to stitch the same thing together by hand along the way!