StatefulSet / kfserving-controller-manager: Back-off restarting failed container

Date: 2019-09-24 23:40:12

Tags: kubernetes kubeflow microk8s

I want to install Kubeflow on-prem (I have Ubuntu 19.04 with 32 GB RAM). For that, here is my setup:

## microk8s
# Install
snap install microk8s --classic --stable
# version
microk8s.kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:50Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

## kfctl
kfctl version
# kfctl v0.6.2-0-g47a0e4c7

## kustomize
kustomize version
Version: {KustomizeVersion:3.2.0 GitCommit:a3103f1e62ddb5b696daa3fd359bb6f2e8333b49 BuildDate:2019-09-18T16:26:36Z GoOs:linux GoArch:amd64}
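
Not shown above, but an assumption about this setup: on a fresh microk8s cluster the dns and storage addons usually need to be enabled before deploying Kubeflow, otherwise pods that depend on cluster DNS or a default StorageClass stay pending. A minimal sketch:

# assumption: enable the basic addons Kubeflow relies on
microk8s.enable dns storage
microk8s.status --wait-ready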

To deploy Kubeflow, I found several yaml config files:

  • https://raw.githubusercontent.com/kubeflow/kubeflow/v0.6-branch/bootstrap/config/kfctl_k8s_istio.0.6.2.yaml: this yaml file fails to bring up all the Pods.

  • https://raw.githubusercontent.com/kubeflow/kubeflow/master/bootstrap/config/kfctl_k8s_istio.yaml: this yaml file brings up everything except the kfserving-controller-manager StatefulSet, as shown below:

microk8s.kubectl -n kubeflow get statefulsets

#NAME                                       READY   AGE
#admission-webhook-bootstrap-stateful-set   1/1     127m
#application-controller-stateful-set        1/1     127m
#kfserving-controller-manager               0/1     126m
#metacontroller                             1/1     127m
#seldon-operator-controller-manager         1/1     126m
microk8s.kubectl -n kubeflow describe statefulsets/kfserving-controller-manager
Name:               kfserving-controller-manager
Namespace:          kubeflow
CreationTimestamp:  Tue, 24 Sep 2019 03:54:37 +0400
Selector:           control-plane=kfserving-controller-manager,controller-tools.k8s.io=1.0,kustomize.component=kfserving
Labels:             control-plane=kfserving-controller-manager
                    controller-tools.k8s.io=1.0
                    kustomize.component=kfserving
Annotations:        kubectl.kubernetes.io/last-applied-configuration:
                      {"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{},"labels":{"control-plane":"kfserving-controller-manager","contro...
Replicas:           1 desired | 1 total
Update Strategy:    RollingUpdate
  Partition:        824638326680
Pods Status:        1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  control-plane=kfserving-controller-manager
           controller-tools.k8s.io=1.0
           kustomize.component=kfserving
  Containers:
   kube-rbac-proxy:
    Image:      gcr.io/kubebuilder/kube-rbac-proxy:v0.4.0
    Port:       8443/TCP
    Host Port:  0/TCP
    Args:
      --secure-listen-address=0.0.0.0:8443
      --upstream=http://127.0.0.1:8080/
      --logtostderr=true
      --v=10
    Environment:  <none>
    Mounts:       <none>
   manager:
    Image:      gcr.io/kfserving/kfserving-controller:v0.1.1
    Port:       9876/TCP
    Host Port:  0/TCP
    Command:
      /manager
    Args:
      --metrics-addr=127.0.0.1:8080
    Limits:
      cpu:     100m
      memory:  300Mi
    Requests:
      cpu:     100m
      memory:  200Mi
    Environment:
      POD_NAMESPACE:   (v1:metadata.namespace)
      SECRET_NAME:    kfserving-webhook-server-secret
    Mounts:
      /tmp/cert from cert (ro)
  Volumes:
   cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kfserving-webhook-server-secret
    Optional:    false
Volume Claims:   <none>
Events:          <none>
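
The Events section above is empty, so to see why the container keeps going into back-off, the usual next step is to look at the pod created by the StatefulSet. A sketch of the standard checks (the label comes from the Selector above, the pod name assumes the usual <statefulset-name>-0 convention, and manager / kube-rbac-proxy are the container names from the pod template):

# pod created by the StatefulSet (label taken from the Selector above)
microk8s.kubectl -n kubeflow get pods -l control-plane=kfserving-controller-manager

# pod events usually name the failing container and the back-off reason
microk8s.kubectl -n kubeflow describe pod kfserving-controller-manager-0

# logs of both containers; --previous shows the last crashed run
microk8s.kubectl -n kubeflow logs kfserving-controller-manager-0 -c manager --previous
microk8s.kubectl -n kubeflow logs kfserving-controller-manager-0 -c kube-rbac-proxy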

So, any suggestions to get this StatefulSet running correctly? Or any recommendation on which version to use for installing Kubeflow on-prem?
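
For reference, the config files above were applied with kfctl roughly like this (a minimal sketch assuming the kfctl v0.6.x init/generate/apply workflow; the KFAPP directory name is arbitrary):

export KFAPP=kf-onprem    # arbitrary deployment directory name (assumption)
export CONFIG=https://raw.githubusercontent.com/kubeflow/kubeflow/master/bootstrap/config/kfctl_k8s_istio.yaml

kfctl init ${KFAPP} --config=${CONFIG} -V   # create the deployment directory from the config
cd ${KFAPP}
kfctl generate all -V                       # render the kustomize manifests
kfctl apply all -V                          # apply everything to the current cluster (microk8s)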

Thanks

0 Answers:

No answers yet.