How do I assign a ConfigMap to a Pod that is already running?

Asked: 2019-08-17 19:53:44

Tags: docker kubernetes google-kubernetes-engine kubernetes-pod configmap

I am unable to load a ConfigMap into a pod that is already running nginx.

I tried it by creating a simple pod definition and adding a simple ConfigMap reference to it.

This pod ran successfully; I saved its YAML file and then deleted the pod.

This is what I got:

    apiVersion: v1
    kind: Pod
    metadata:
      name: testpod
    spec:
      containers:
      - name: testcontainer
        image: nginx
        env:
        - name: MY_VAR
          valueFrom:
            configMapKeyRef:
              name: configmap1
              key: data1
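
For reference, a ConfigMap matching the configMapKeyRef above would look roughly like this; the actual value stored under data1 is not part of the question, so the one below is only a placeholder:

    # Assumed example ConfigMap: the question only references the name and key,
    # so the value shown for data1 is a placeholder.
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: configmap1
    data:
      data1: some-value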

Then I tried to copy that syntax into another pod that is already running.

This is the result of what I am using with kubectl edit pod po:

    apiVersion: v1
    kind: Pod
    metadata:
      annotations:
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"testpod","namespace":"default"},"spec":{"containers":[{"env":[{"name":"MY_VAR","valueFrom":{"configMapKeyRef":{"key":"data1","name":"configmap1"}}}],"image":"nginx","name":"testcontainer"}]}}
      creationTimestamp: null
      name: testpod
      selfLink: /api/v1/namespaces/default/pods/testpod
    spec:
      containers:
      - env:
        - name: MY_VAR
          valueFrom:
            configMapKeyRef:
              key: data1
              name: configmap1
        image: nginx
        imagePullPolicy: Always
        name: testcontainer
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          name: default-token-27x4x
          readOnly: true
      dnsPolicy: ClusterFirst
      enableServiceLinks: true
      nodeName: ip-10-0-1-103
      priority: 0
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: default
      serviceAccountName: default
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoExecute
        key: node.kubernetes.io/not-ready
        operator: Exists
        tolerationSeconds: 300
      - effect: NoExecute
        key: node.kubernetes.io/unreachable
        operator: Exists
        tolerationSeconds: 300
      volumes:
      - name: default-token-27x4x
        secret:
          defaultMode: 420
          secretName: default-token-27x4x
    status:
      phase: Pending
      qosClass: BestEffort

And this is the output of k get po pod1 -o yaml --export:

    apiVersion: v1
    kind: Pod
    metadata:
      creationTimestamp: "2019-08-17T18:15:22Z"
      labels:
        run: pod1
      name: pod1
      namespace: default
      resourceVersion: "12167"
      selfLink: /api/v1/namespaces/default/pods/pod1
      uid: fa297c13-c11a-11e9-9a5f-02ca4f0dcea0
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: pod1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          name: default-token-27x4x
          readOnly: true
      dnsPolicy: ClusterFirst
      enableServiceLinks: true
      nodeName: ip-10-0-1-102
      priority: 0
      restartPolicy: Never
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: default
      serviceAccountName: default
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoExecute
        key: node.kubernetes.io/not-ready
        operator: Exists
        tolerationSeconds: 300
      - effect: NoExecute
        key: node.kubernetes.io/unreachable
        operator: Exists
        tolerationSeconds: 300
      volumes:
      - name: default-token-27x4x
        secret:
          defaultMode: 420
          secretName: default-token-27x4x
    status:
      conditions:
      - lastProbeTime: null
        lastTransitionTime: "2019-08-17T18:15:22Z"
        status: "True"
        type: Initialized
      - lastProbeTime: null
        lastTransitionTime: "2019-08-17T18:15:27Z"
        status: "True"
        type: Ready
      - lastProbeTime: null
        lastTransitionTime: "2019-08-17T18:15:27Z"
        status: "True"
        type: ContainersReady
      - lastProbeTime: null
        lastTransitionTime: "2019-08-17T18:15:22Z"
        status: "True"
        type: PodScheduled
      containerStatuses:
      - containerID: docker://99bfded0d69f4ed5ed854e59b458acd8a9197f9bef6d662a03587fe2ff61b128
        image: nginx:latest
        imageID: docker-pullable://nginx@sha256:53ddb41e46de3d63376579acf46f9a41a8d7de33645db47a486de9769201fec9
        lastState: {}
        name: pod1
        ready: true
        restartCount: 0
        state:
          running:
            startedAt: "2019-08-17T18:15:27Z"
      hostIP: 10.0.1.102
      phase: Running
      podIP: 10.244.2.2
      qosClass: BestEffort
      startTime: "2019-08-17T18:15:22Z"

Am I doing something wrong, or am I missing something?

1 answer:

Answer 0 (score: 1):

You cannot add configuration to a pod that is already running; that is inherent to how containers work.

Simply put: the container runs together with its service, and the state of that service defines the state of the container. As you know, if you change nginx's configuration you have to reload it, but doing that inside a running container is not good practice, so instead you stop the container and start it again with the new configuration.

So what you are seeing is normal: the service is still in the running state, so even if you change the file inside the container, it keeps running with the old configuration it started with.
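
In practice that means recreating the pod with the ConfigMap reference included, rather than editing it in place. As a rough sketch, reusing the configmap1/data1 names from the question, you would delete pod1 and apply a manifest along these lines:

    # Sketch only: delete the running pod first (kubectl delete pod pod1),
    # then recreate it with the env entry added.
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod1
    spec:
      containers:
      - name: pod1
        image: nginx
        env:
        - name: MY_VAR
          valueFrom:
            configMapKeyRef:
              name: configmap1
              key: data1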

If you need to reload the service without downtime, run several replicas and define a rolling-update strategy so that there is no downtime during the update.
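
A minimal sketch of that approach, assuming the same nginx image and ConfigMap names as above: run the pod under a Deployment with several replicas, so that a change to the pod template (such as the env section) is rolled out replica by replica:

    # Hypothetical Deployment; the name and replica count are illustrative.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
            env:
            - name: MY_VAR
              valueFrom:
                configMapKeyRef:
                  name: configmap1
                  key: data1

Note that editing only the ConfigMap still does not restart pods that consume it through env; it is the change to the pod template itself that triggers the rolling update.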

There are some special cases, such as Grafana, which can check whether its configuration files have changed since they were last read.
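
That kind of live reload only works when the configuration reaches the container as a file, typically by mounting the ConfigMap as a volume rather than exposing it as environment variables; files projected from a ConfigMap volume are eventually refreshed inside a running pod when the ConfigMap changes, while environment variables are not. A sketch of such a mount, reusing the example configmap1:

    # Illustrative pod; the pod name and mount path are assumptions, not from the question.
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-with-config-volume
    spec:
      containers:
      - name: app
        image: nginx
        volumeMounts:
        - name: config
          mountPath: /etc/myconfig
          readOnly: true
      volumes:
      - name: config
        configMap:
          name: configmap1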