Automate Retrieving and Storing the Kubeconfig File After Creating a Cluster with Terraform/GKE

Date: 2019-04-08 13:30:14

Tags: kubernetes terraform google-kubernetes-engine

When I use Terraform to create a cluster in GKE, everything works as expected.

After the cluster is created, I then want to use Terraform to deploy a workload to it.

My issue is how to point at the correct cluster; I'm not sure I understand the best way of achieving this.

I want to automate retrieving the cluster's kubeconfig file, which is generally stored at ~/.kube/config. This file is updated when users manually run gcloud container clusters get-credentials to authenticate to the correct cluster.

I am aware that if this file is stored on the host machine (the one Terraform runs on), it's possible to point at it to authenticate to the cluster, like so:

provider "kubernetes" {
  # leave blank to pick up the config from the local system's kubectl config
  config_path = "~/.kube/config"
}

However, running this command to generate the kubeconfig requires the Cloud SDK to be installed on the same machine Terraform runs on, and executing it manually doesn't exactly seem elegant.

I am sure I must be missing something in how to achieve this.

Is there a better way to retrieve the kubeconfig file via Terraform from a cluster created by Terraform?

2 Answers:

Answer 0 (score: 0):

Basically, create the cluster in one step and export the kubeconfig file somewhere, for example to S3.

In another step, retrieve the file and move it to the default folder. Terraform should work through these steps in order; you can then apply your objects to the previously created cluster. A sketch of both steps follows below.

I'm deploying with GitLab CI pipelines: one repository holds the code for the k8s cluster (the layer underneath) and another holds the code for the k8s objects. The first pipeline triggers the second.
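
A minimal Terraform sketch of that idea, assuming the null provider is available and using a hypothetical bucket name (my-kubeconfig-bucket) and cluster resource (google_container_cluster.primary); none of these names come from the answer itself:

# Step 1 (cluster pipeline): once the cluster exists, generate the kubeconfig
# with the Cloud SDK and push it to S3. Bucket and resource names are hypothetical.
resource "null_resource" "export_kubeconfig" {
  depends_on = [google_container_cluster.primary]

  provisioner "local-exec" {
    command = <<EOT
gcloud container clusters get-credentials ${google_container_cluster.primary.name} --zone ${google_container_cluster.primary.location}
aws s3 cp ~/.kube/config s3://my-kubeconfig-bucket/config
EOT
  }
}

# Step 2 (objects pipeline, in the second repository): pull the kubeconfig back
# down to the default location before the kubernetes provider reads it.
resource "null_resource" "fetch_kubeconfig" {
  provisioner "local-exec" {
    command = "aws s3 cp s3://my-kubeconfig-bucket/config ~/.kube/config"
  }
}

With the file restored to ~/.kube/config, the kubernetes provider block shown in the question works unchanged in the second pipeline.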

Answer 1 (score: 0):

Actually, there is another way to access the newly created GKE cluster.

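One common pattern that matches this description, shown here as a hedged sketch rather than the answer's own code, is to configure the kubernetes provider straight from the cluster resource's outputs, so no kubeconfig file is needed at all (google_container_cluster.primary is an assumed resource name):

# Access token for the identity running Terraform, from the google provider.
data "google_client_config" "default" {}

provider "kubernetes" {
  # Endpoint and CA certificate come straight from the cluster resource,
  # so neither a kubeconfig file nor a Cloud SDK call is required.
  host                   = "https://${google_container_cluster.primary.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(google_container_cluster.primary.master_auth[0].cluster_ca_certificate)
}

With this wiring, the kubernetes provider waits until the cluster's attributes are known, so the workload resources can live in the same configuration, or in a second one that reads the cluster through a data source.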