Have to re-apply the same Terraform configuration for the GKE cluster's network policy to match the original configuration

Date: 2020-01-05 14:49:10

Tags: terraform terraform-provider-gcp

I created a new GKE cluster with Terraform; the configuration file contains:

    network_policy_config {
      disabled = false
    }
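
For context, here is a minimal sketch of where that block sits in the resource (the resource label, cluster name, location, and node count are hypothetical; `network_policy_config` lives under `addons_config` in `google_container_cluster`):

    # Hypothetical minimal cluster definition showing the placement of
    # network_policy_config inside addons_config.
    resource "google_container_cluster" "example" {
      name               = "example-cluster"
      location           = "us-central1"
      initial_node_count = 2

      addons_config {
        network_policy_config {
          disabled = false
        }
      }
    }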

After the new GKE cluster was created, the output of terraform show was:

    network_policy_config {
      disabled = true
    }

I ran terraform apply again, and this time it applied what I had originally configured. Here is the output of kubectl -n kube-system get pod before and after applying the change:

Before:

$ kubectl -n kube-system get pod
NAME                                                             READY   STATUS    RESTARTS   AGE
event-exporter-v0.3.0-74bf544f8b-8rn7g                           2/2     Running   0          12m
fluentd-gcp-scaler-dd489f778-4zk7j                               1/1     Running   0          12m
fluentd-gcp-v3.1.1-8grpn                                         2/2     Running   6          8m17s
fluentd-gcp-v3.1.1-tlnf2                                         2/2     Running   6          8m21s
heapster-55cfc57479-d2cqb                                        3/3     Running   0          102s
kube-dns-7557678d7d-l62ct                                        4/4     Running   8          8m25s
kube-dns-7557678d7d-vghhz                                        4/4     Running   8          12m
kube-dns-autoscaler-6d7c4b8447-fwlhz                             1/1     Running   0          12m
kube-proxy-gke-center-anhcq151--terraform-202001-4ad6c87c-trvw   1/1     Running   0          8m37s
kube-proxy-gke-center-anhcq151--terraform-202001-4ad6c87c-xp4b   1/1     Running   0          8m37s
l7-default-backend-84c9fcfbb-77gj2                               1/1     Running   0          12m
metrics-server-v0.3.3-85dfcbb78-flf6c                            2/2     Running   4          12m
prometheus-to-sd-m6sx9                                           2/2     Running   0          8m36s
prometheus-to-sd-tlf6q                                           2/2     Running   0          8m37s
stackdriver-metadata-agent-cluster-level-647b8665c4-wkfpq        1/1     Running   6          12m

After:

$ kubectl -n kube-system get pod
NAME                                                             READY   STATUS    RESTARTS   AGE
calico-node-vertical-autoscaler-66f789fc5d-kvpj8                 1/1     Running   0          82s
calico-typha-6575d9b47d-82m9p                                    1/1     Running   0          78s
calico-typha-horizontal-autoscaler-69f66cbb58-mwsrj              1/1     Running   0          82s
calico-typha-vertical-autoscaler-6768b87f5c-rml8l                1/1     Running   0          82s
event-exporter-v0.3.0-74bf544f8b-8rn7g                           2/2     Running   0          20m
fluentd-gcp-scaler-dd489f778-4zk7j                               1/1     Running   0          20m
fluentd-gcp-v3.1.1-8grpn                                         2/2     Running   7          17m
fluentd-gcp-v3.1.1-tlnf2                                         2/2     Running   7          17m
heapster-67d9d66845-g7k66                                        3/3     Running   0          80s
kube-dns-7557678d7d-l62ct                                        4/4     Running   8          17m
kube-dns-7557678d7d-vghhz                                        4/4     Running   8          20m
kube-dns-autoscaler-6d7c4b8447-fwlhz                             1/1     Running   0          20m
kube-proxy-gke-center-anhcq151--terraform-202001-4ad6c87c-trvw   1/1     Running   0          17m
kube-proxy-gke-center-anhcq151--terraform-202001-4ad6c87c-xp4b   1/1     Running   0          17m
l7-default-backend-84c9fcfbb-77gj2                               1/1     Running   0          20m
metrics-server-v0.3.3-85dfcbb78-flf6c                            2/2     Running   4          20m
prometheus-to-sd-m6sx9                                           2/2     Running   0          17m
prometheus-to-sd-tlf6q                                           2/2     Running   0          17m
stackdriver-metadata-agent-cluster-level-647b8665c4-wkfpq        1/1     Running   6          20m

Can anyone explain what Terraform did here, and how I can get the same result in a single apply?

Thanks!

1 answer:

Answer 0 (score: 0):

I still don't know why applying the same configuration twice causes this. But I found a simple workaround: just add this block to the google_container_cluster resource:

    network_policy {
      provider = "CALICO"
      enabled  = true
    }
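
Putting the answer's workaround together with the original configuration gives something like the sketch below (resource label and cluster attributes are hypothetical). In the google provider, `addons_config.network_policy_config` controls installation of the network-policy addon, while the top-level `network_policy` block controls whether enforcement (Calico) is enabled, so declaring both lets a single apply converge:

    # Hypothetical combined configuration: the addon is enabled via
    # addons_config, and enforcement is enabled via network_policy.
    resource "google_container_cluster" "example" {
      name               = "example-cluster"
      location           = "us-central1"
      initial_node_count = 2

      addons_config {
        network_policy_config {
          disabled = false
        }
      }

      network_policy {
        provider = "CALICO"
        enabled  = true
      }
    }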