How do I get a Kubernetes secret from one cluster and apply it to another?

Time: 2019-08-18 21:11:25

Tags: kubernetes google-kubernetes-engine gcloud kubernetes-secrets

For my e2e tests I'm spinning up a separate cluster into which I'd like to import my production TLS certificate. I'm having trouble switching the context between the two clusters (export/get from one and import/apply into the other) because the cluster doesn't seem to be visible.

I extracted an MVCE using GitLab CI with the following .gitlab-ci.yml, where I create a secret for demonstration purposes:

stages:
  - main
  - tear-down

main:
  image: google/cloud-sdk
  stage: main
  script:
    - echo "$GOOGLE_KEY" > key.json
    - gcloud config set project secret-transfer
    - gcloud auth activate-service-account --key-file key.json --project secret-transfer
    - gcloud config set compute/zone us-central1-a
    - gcloud container clusters create secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID --project secret-transfer --machine-type=f1-micro
    - kubectl create secret generic secret-1 --from-literal=key=value
    - gcloud container clusters create secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID --project secret-transfer --machine-type=f1-micro
    - gcloud config set container/use_client_certificate True
    - gcloud config set container/cluster secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
    - kubectl get secret secret-1 --cluster=secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID -o yaml > secret-1.yml
    - gcloud config set container/cluster secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
    - kubectl apply --cluster=secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID -f secret-1.yml

tear-down:
  image: google/cloud-sdk
  stage: tear-down
  when: always
  script:
    - echo "$GOOGLE_KEY" > key.json
    - gcloud config set project secret-transfer
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone us-central1-a
    - gcloud container clusters delete --quiet secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
    - gcloud container clusters delete --quiet secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID

I added gcloud config set container/cluster secret-transfer-[1/2]-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID before the kubectl statements in order to avoid error: no server found for cluster "secret-transfer-1-...-...", but that doesn't change the outcome.

I created a project secret-transfer, activated the Kubernetes API and obtained a JSON key for the Compute Engine service account, which I provide in the environment variable GOOGLE_KEY. The output after checkout is

$ echo "$GOOGLE_KEY" > key.json

$ gcloud config set project secret-transfer
Updated property [core/project].

$ gcloud auth activate-service-account --key-file key.json --project secret-transfer
Activated service account credentials for: [131478687181-compute@developer.gserviceaccount.com]

$ gcloud config set compute/zone us-central1-a
Updated property [compute/zone].

$ gcloud container clusters create secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID --project secret-transfer --machine-type=f1-micro
WARNING: In June 2019, node auto-upgrade will be enabled by default for newly created clusters and node pools. To disable it, use the `--no-enable-autoupgrade` flag.
WARNING: Starting in 1.12, new clusters will have basic authentication disabled by default. Basic authentication can be enabled (or disabled) manually using the `--[no-]enable-basic-auth` flag.
WARNING: Starting in 1.12, new clusters will not have a client certificate issued. You can manually enable (or disable) the issuance of the client certificate using the `--[no-]issue-client-certificate` flag.
WARNING: Currently VPC-native is not the default mode during cluster creation. In the future, this will become the default mode and can be disabled using `--no-enable-ip-alias` flag. Use `--[no-]enable-ip-alias` flag to suppress this warning.
WARNING: Starting in 1.12, default node pools in new clusters will have their legacy Compute Engine instance metadata endpoints disabled by default. To create a cluster with legacy instance metadata endpoints disabled in the default node pool, run `clusters create` with the flag `--metadata disable-legacy-endpoints=true`.
WARNING: Your Pod address range (`--cluster-ipv4-cidr`) can accommodate at most 1008 node(s). 
This will enable the autorepair feature for nodes. Please see https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for more information on node autorepairs.
Creating cluster secret-transfer-1-9b219ea8-9 in us-central1-a...
...done.
Created [https://container.googleapis.com/v1/projects/secret-transfer/zones/us-central1-a/clusters/secret-transfer-1-9b219ea8-9].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/us-central1-a/secret-transfer-1-9b219ea8-9?project=secret-transfer
kubeconfig entry generated for secret-transfer-1-9b219ea8-9.
NAME                          LOCATION       MASTER_VERSION  MASTER_IP      MACHINE_TYPE  NODE_VERSION   NUM_NODES  STATUS
secret-transfer-1-9b219ea8-9  us-central1-a  1.12.8-gke.10   34.68.118.165  f1-micro      1.12.8-gke.10  3          RUNNING

$ kubectl create secret generic secret-1 --from-literal=key=value
secret/secret-1 created

$ gcloud container clusters create secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID --project secret-transfer --machine-type=f1-micro
WARNING: In June 2019, node auto-upgrade will be enabled by default for newly created clusters and node pools. To disable it, use the `--no-enable-autoupgrade` flag.
WARNING: Starting in 1.12, new clusters will have basic authentication disabled by default. Basic authentication can be enabled (or disabled) manually using the `--[no-]enable-basic-auth` flag.
WARNING: Starting in 1.12, new clusters will not have a client certificate issued. You can manually enable (or disable) the issuance of the client certificate using the `--[no-]issue-client-certificate` flag.
WARNING: Currently VPC-native is not the default mode during cluster creation. In the future, this will become the default mode and can be disabled using `--no-enable-ip-alias` flag. Use `--[no-]enable-ip-alias` flag to suppress this warning.
WARNING: Starting in 1.12, default node pools in new clusters will have their legacy Compute Engine instance metadata endpoints disabled by default. To create a cluster with legacy instance metadata endpoints disabled in the default node pool, run `clusters create` with the flag `--metadata disable-legacy-endpoints=true`.
WARNING: Your Pod address range (`--cluster-ipv4-cidr`) can accommodate at most 1008 node(s). 
This will enable the autorepair feature for nodes. Please see https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for more information on node autorepairs.
Creating cluster secret-transfer-2-9b219ea8-9 in us-central1-a...
...done.
Created [https://container.googleapis.com/v1/projects/secret-transfer/zones/us-central1-a/clusters/secret-transfer-2-9b219ea8-9].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/us-central1-a/secret-transfer-2-9b219ea8-9?project=secret-transfer
kubeconfig entry generated for secret-transfer-2-9b219ea8-9.
NAME                          LOCATION       MASTER_VERSION  MASTER_IP      MACHINE_TYPE  NODE_VERSION   NUM_NODES  STATUS
secret-transfer-2-9b219ea8-9  us-central1-a  1.12.8-gke.10   104.198.37.21  f1-micro      1.12.8-gke.10  3          RUNNING

$ gcloud config set container/use_client_certificate True
Updated property [container/use_client_certificate].

$ gcloud config set container/cluster secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
Updated property [container/cluster].

$ kubectl get secret secret-1 --cluster=secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID -o yaml > secret-1.yml
error: no server found for cluster "secret-transfer-1-9b219ea8-9"

I'm expecting kubectl get secret to work because both clusters exist and the --cluster argument points to the right cluster.
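For completeness, one way to see which names the --cluster flag is actually matched against (a diagnostic sketch, not part of the pipeline above) would be:

# List the cluster and context entries in the active kubeconfig;
# kubectl's --cluster/--context flags must match these names exactly.
kubectl config get-clusters
kubectl config get-contexts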

2 answers:

Answer 0 (score: 3)

Generally speaking, gcloud commands are used to manage gcloud resources and handle how you authenticate with gcloud, whereas kubectl commands affect how you interact with Kubernetes clusters, whether or not they happen to be running on GCP and/or created in GKE. As such, I would avoid doing:

$ gcloud config set container/use_client_certificate True
Updated property [container/use_client_certificate].

$ gcloud config set container/cluster \
  secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
Updated property [container/cluster].

It's not doing what you probably think it's doing (namely, it changes nothing about which cluster kubectl targets), and it might mess with how future gcloud commands work.
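If the goal is only to inspect or change what kubectl is pointing at, the kubectl-side equivalents would be something like this (a sketch, not commands from the pipeline above):

# Show the contexts kubectl knows about and which one is currently active
kubectl config get-contexts
kubectl config current-context

# Switch the active context explicitly (name taken from the get-contexts output)
kubectl config use-context <context-name>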

Another consequence of the way gcloud and kubectl are kept separate, and in particular of the fact that kubectl doesn't know much about your gcloud settings, is that the cluster name from the gcloud perspective is not the same as the cluster name from the kubectl perspective. When you do things like gcloud config set compute/zone, kubectl knows nothing about that, so it has to be able to uniquely identify clusters that may share a name but live in different projects and zones, and may not even be in GKE (e.g. minikube or some other cloud provider). That's why kubectl --cluster=<gke-cluster-name> <some_command> is not going to work, and it's why you're seeing the error message:

error: no server found for cluster "secret-transfer-1-9b219ea8-9"

As @coderanger pointed out, the cluster name that gets generated in your ~/.kube/config file after gcloud container clusters create ... has a more complex form, currently something like gke_[project]_[region]_[name].
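Rather than guessing that pattern, one option (an illustrative sketch, not something from the pipeline above) is to let gcloud (re)generate the kubeconfig entry and read the exact name back:

# (Re)write the kubeconfig entry for the first cluster ...
gcloud container clusters get-credentials \
  secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID \
  --zone us-central1-a --project secret-transfer

# ... then print the generated context name, e.g.
# gke_secret-transfer_us-central1-a_secret-transfer-1-9b219ea8-9
kubectl config current-context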

So you could run commands with kubectl --cluster gke_[project]_[region]_[name] ... (or kubectl --context [project]_[region]_[name] ..., which would be more idiomatic, although both happen to work in this case since both clusters use the same service account), but that requires knowing how gcloud generates these strings for context and cluster names.
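Concretely, once the generated names are known (kubectl config get-contexts will list them), the transfer steps from the question would look roughly like this; the expanded gke_... names below are assumptions based on that pattern, not output from a real run:

# Export the secret from cluster 1 and apply it to cluster 2,
# addressing each cluster by its generated context name
kubectl --context "gke_secret-transfer_us-central1-a_secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID" \
  get secret secret-1 -o yaml > secret-1.yml
kubectl --context "gke_secret-transfer_us-central1-a_secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID" \
  apply -f secret-1.yml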

An alternative would be to do something like this:

$ KUBECONFIG=~/.kube/config1 gcloud container clusters create \
    secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID \
    --project secret-transfer --machine-type=f1-micro
$ KUBECONFIG=~/.kube/config1 kubectl create secret generic secret-1 --from-literal=key=value
$ KUBECONFIG=~/.kube/config2 gcloud container clusters create \
    secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID \
    --project secret-transfer --machine-type=f1-micro
$ KUBECONFIG=~/.kube/config1 kubectl get secret secret-1 -o yaml > secret-1.yml
$ KUBECONFIG=~/.kube/config2 kubectl apply -f secret-1.yml

By having separate KUBECONFIG files that you control, you don't have to guess any strings. Setting the KUBECONFIG variable when creating a cluster will result in that file being created, with gcloud putting the credentials for kubectl to access that cluster into it. Setting the KUBECONFIG environment variable when running a kubectl command will ensure kubectl uses the context set in that particular file.
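A quick way to sanity-check which cluster each file ends up pointing at (a verification sketch, not part of the original suggestion):

# Each file should contain exactly one context, pointing at its own cluster
KUBECONFIG=~/.kube/config1 kubectl config get-contexts
KUBECONFIG=~/.kube/config2 kubectl config get-contexts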

Answer 1 (score: 0)

You probably want to use --context rather than --cluster. The context sets both the cluster and the user being used. Additionally, the context and cluster (and user) names created by GKE are not just the cluster identifier; they are of the form gke_[project]_[region]_[name].
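If typing out the full generated names is awkward, one workaround (a sketch that assumes the gke_... names follow the pattern above) is to rename the contexts to something predictable right after cluster creation:

# Rename the GKE-generated contexts to short, predictable names
kubectl config rename-context \
  "gke_secret-transfer_us-central1-a_secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID" cluster-1
kubectl config rename-context \
  "gke_secret-transfer_us-central1-a_secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID" cluster-2

# The transfer can then reference the short names
kubectl --context cluster-1 get secret secret-1 -o yaml > secret-1.yml
kubectl --context cluster-2 apply -f secret-1.yml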