How do I delete completed Kubernetes pods?

Date: 2019-03-08 23:06:48

Tags: bash kubernetes

Answering my own question

I have a bunch of pods in Kubernetes that have finished (either Succeeded or Failed), and I want to clean up the output of kubectl get pods. Here's what I see when I run kubectl get pods:

NAME                                           READY   STATUS             RESTARTS   AGE
intent-insights-aws-org-73-ingest-391c9384     0/1     ImagePullBackOff   0          8d
intent-postgres-f6dfcddcc-5qwl7                1/1     Running            0          23h
redis-scheduler-dev-master-0                   1/1     Running            0          10h
redis-scheduler-dev-metrics-85b45bbcc7-ch24g   1/1     Running            0          6d
redis-scheduler-dev-slave-74c7cbb557-dmvfg     1/1     Running            0          10h
redis-scheduler-dev-slave-74c7cbb557-jhqwx     1/1     Running            0          5d
scheduler-5f48b845b6-d5p4s                     2/2     Running            0          36m
snapshot-169-5af87b54                          0/1     Completed          0          20m
snapshot-169-8705f77c                          0/1     Completed          0          1h
snapshot-169-be6f4774                          0/1     Completed          0          1h
snapshot-169-ce9a8946                          0/1     Completed          0          1h
snapshot-169-d3099b06                          0/1     ImagePullBackOff   0          24m
snapshot-204-50714c88                          0/1     Completed          0          21m
snapshot-204-7c86df5a                          0/1     Completed          0          1h
snapshot-204-87f35e36                          0/1     ImagePullBackOff   0          26m
snapshot-204-b3a4c292                          0/1     Completed          0          1h
snapshot-204-c3d90db6                          0/1     Completed          0          1h
snapshot-245-3c9a7226                          0/1     ImagePullBackOff   0          28m
snapshot-245-45a907a0                          0/1     Completed          0          21m
snapshot-245-71911b06                          0/1     Completed          0          1h
snapshot-245-a8f5dd5e                          0/1     Completed          0          1h
snapshot-245-b9132236                          0/1     Completed          0          1h
snapshot-76-1e515338                           0/1     Completed          0          22m
snapshot-76-4a7d9a30                           0/1     Completed          0          1h
snapshot-76-9e168c9e                           0/1     Completed          0          1h
snapshot-76-ae510372                           0/1     Completed          0          1h
snapshot-76-f166eb18                           0/1     ImagePullBackOff   0          30m
train-169-65f88cec                             0/1     Error              0          20m
train-169-9c92f72a                             0/1     Error              0          1h
train-169-c935fc84                             0/1     Error              0          1h
train-169-d9593f80                             0/1     Error              0          1h
train-204-70729e42                             0/1     Error              0          20m
train-204-9203be3e                             0/1     Error              0          1h
train-204-d3f2337c                             0/1     Error              0          1h
train-204-e41a3e88                             0/1     Error              0          1h
train-245-7b65d1f2                             0/1     Error              0          19m
train-245-a7510d5a                             0/1     Error              0          1h
train-245-debf763e                             0/1     Error              0          1h
train-245-eec1908e                             0/1     Error              0          1h
train-76-86381784                              0/1     Completed          0          19m
train-76-b1fdc202                              0/1     Error              0          1h
train-76-e972af06                              0/1     Error              0          1h
train-76-f993c8d8                              0/1     Completed          0          1h
webserver-7fc9c69f4d-mnrjj                     2/2     Running            0          36m
worker-6997bf76bd-kvjx4                        2/2     Running            0          25m
worker-6997bf76bd-prxbg                        2/2     Running            0          36m

I'd like to get rid of pods like train-204-d3f2337c. How can I do that?

5 Answers:

Answer 0 (score: 12)

This is easier now.

You can list all the completed pods with:

kubectl get pod --field-selector=status.phase==Succeeded

And delete all the completed pods with:

kubectl delete pod --field-selector=status.phase==Succeeded
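
The same selector syntax works for the Failed phase, which covers the Error pods in the listing above. A sketch (note that pods stuck in ImagePullBackOff typically remain in the Pending phase, so neither selector catches those):

# Delete pods whose overall phase is Failed (e.g. the Error pods above)
kubectl delete pod --field-selector=status.phase==Failed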

Answer 1 (score: 1)

Here's a one-liner that will delete every pod that is not in a Running or Pending state (note that a pod whose name contains "Running" or "Pending" will never be deleted by this one-liner):

kubectl get pods --no-headers=true |grep -v "Running" | grep -v "Pending" | sed -E 's/([a-z0-9-]+).*/\1/g' | xargs kubectl delete pod

An explanation:

  1. Get all the pods without any headers
  2. Filter out the pods that are Running
  3. Filter out the pods that are Pending
  4. Pull out the pod name with a sed regex
  5. Use xargs to delete each pod by name

Note that this doesn't account for all possible pod states. For example, if a pod is in the ContainerCreating state, this one-liner will delete it too.
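
To avoid the pod-name false match warned about above, a variant could compare the STATUS column instead of grepping the whole line. A sketch, assuming the default five-column output of kubectl get pods and GNU xargs (whose -r flag skips the delete when nothing matches):

# Match only the STATUS column (column 3), not the whole line
kubectl get pods --no-headers=true \
  | awk '$3 != "Running" && $3 != "Pending" {print $1}' \
  | xargs -r kubectl delete pod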

Answer 2 (score: 1)

You can do this in either of two ways:

$ kubectl delete pod $(kubectl get pods | grep Completed | awk '{print $1}')

$ kubectl get pods | grep Completed | awk '{print $1}' | xargs kubectl delete pod

Both solutions will do the job.
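
The same grep pattern can be widened to also sweep up the Error pods from the question's listing. A sketch using the same pipeline:

kubectl get pods | grep -E 'Completed|Error' | awk '{print $1}' | xargs kubectl delete pod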

Answer 3 (score: 0)

If these pods were created by a CronJob, you can use spec.failedJobsHistoryLimit and spec.successfulJobsHistoryLimit so that finished Jobs (and their pods) are pruned automatically.

Example:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cron-job
spec:
  schedule: "*/10 * * * *"
  failedJobsHistoryLimit: 1
  successfulJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
         ...
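
To confirm the limits on a live CronJob, one possible check (my-cron-job follows the example above):

kubectl get cronjob my-cron-job \
  -o jsonpath='{.spec.successfulJobsHistoryLimit}{"\n"}{.spec.failedJobsHistoryLimit}{"\n"}'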

Answer 4 (score: 0)

If you want to delete the pods that are not running, you can do it with a single command:

kubectl delete pods --field-selector=status.phase!=Running
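
Note that status.phase!=Running also matches Pending pods (including ones stuck in ImagePullBackOff), so it may be worth previewing the selection first:

# Preview which pods the selector matches before deleting
kubectl get pods --field-selector=status.phase!=Running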