Kubernetes replication controller stuck in CrashLoopBackOff state

Asked: 2016-04-17 23:12:55

Tags: amazon-web-services docker kubernetes kubernetes-health-check

I performed the following steps.

Created a replication controller using the following config file:

{
   "kind":"ReplicationController",
   "apiVersion":"v1",
   "metadata":{
      "name":"fsharp-service",
      "labels":{
         "app":"fsharp-service"
      }
   },
   "spec":{
      "replicas":1,
      "selector":{
         "app":"fsharp-service"
      },
      "template":{
         "metadata":{
            "labels":{
               "app":"fsharp-service"
            }
         },
         "spec":{
            "containers":[
               {
                  "name":"fsharp-service",
                  "image":"fsharp/fsharp:latest",
                  "ports":[
                     {
                        "name":"http-server",
                        "containerPort":3000
                     }
                  ]
               }
            ]
         }
      }
   }
}

Ran the command:

kubectl create -f fsharp-controller.json

Here is the output:

$ kubectl get rc
CONTROLLER       CONTAINER(S)     IMAGE(S)                             SELECTOR             REPLICAS
cassandra        cassandra        gcr.io/google-samples/cassandra:v8   app=cassandra        3
fsharp-service   fsharp-service   fsharp/fsharp:latest                 app=fsharp-service   1
$ kubectl get pods
NAME                   READY     REASON    RESTARTS   AGE
cassandra              1/1       Running   0          28m
cassandra-ch1br        1/1       Running   0          28m
cassandra-xog49        1/1       Running   0          27m
fsharp-service-7lrq8   0/1       Error     2          31s
$ kubectl logs fsharp-service-7lrq8

F# Interactive for F# 4.0 (Open Source Edition)
Freely distributed under the Apache 2.0 Open Source License

For help type #help;;

$ kubectl get pods
NAME                   READY     REASON             RESTARTS   AGE
cassandra              1/1       Running            0          28m
cassandra-ch1br        1/1       Running            0          28m
cassandra-xog49        1/1       Running            0          28m
fsharp-service-7lrq8   0/1       CrashLoopBackOff   3          1m
$ kubectl describe po fsharp-service-7lrq8
W0417 15:52:36.288492   11461 request.go:302] field selector: v1 - events - involvedObject.name - fsharp-service-7lrq8: need to check if this is versioned correctly.
W0417 15:52:36.289196   11461 request.go:302] field selector: v1 - events - involvedObject.namespace - default: need to check if this is versioned correctly.
W0417 15:52:36.289204   11461 request.go:302] field selector: v1 - events - involvedObject.uid - d4dab099-04ee-11e6-b7f9-0a11c670939b: need to check if this is versioned correctly.
Name:               fsharp-service-7lrq8
Image(s):           fsharp/fsharp:latest
Node:               ip-172-20-0-228.us-west-2.compute.internal/172.20.0.228
Labels:             app=fsharp-service
Status:             Running
Replication Controllers:    fsharp-service (1/1 replicas created)
Containers:
  fsharp-service:
    Image:      fsharp/fsharp:latest
    State:      Waiting
      Reason:       CrashLoopBackOff
    Ready:      False
    Restart Count:  3
Conditions:
  Type      Status
  Ready     False
Events:
  FirstSeen             LastSeen            Count   From                            SubobjectPath           Reason      Message
  Sun, 17 Apr 2016 15:50:50 -0700   Sun, 17 Apr 2016 15:50:50 -0700 1   {default-scheduler }                                    Scheduled   Successfully assigned fsharp-service-7lrq8 to ip-172-20-0-228.us-west-2.compute.internal
  Sun, 17 Apr 2016 15:50:51 -0700   Sun, 17 Apr 2016 15:50:51 -0700 1   {kubelet ip-172-20-0-228.us-west-2.compute.internal}    spec.containers{fsharp-service} Created     Created container with docker id d44c288ea67b
  Sun, 17 Apr 2016 15:50:51 -0700   Sun, 17 Apr 2016 15:50:51 -0700 1   {kubelet ip-172-20-0-228.us-west-2.compute.internal}    spec.containers{fsharp-service} Started     Started container with docker id d44c288ea67b
  Sun, 17 Apr 2016 15:50:55 -0700   Sun, 17 Apr 2016 15:50:55 -0700 1   {kubelet ip-172-20-0-228.us-west-2.compute.internal}    spec.containers{fsharp-service} Started     Started container with docker id 688a3ed122d2
  Sun, 17 Apr 2016 15:50:55 -0700   Sun, 17 Apr 2016 15:50:55 -0700 1   {kubelet ip-172-20-0-228.us-west-2.compute.internal}    spec.containers{fsharp-service} Created     Created container with docker id 688a3ed122d2
  Sun, 17 Apr 2016 15:50:58 -0700   Sun, 17 Apr 2016 15:50:58 -0700 1   {kubelet ip-172-20-0-228.us-west-2.compute.internal}                    FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "fsharp-service" with CrashLoopBackOff: "Back-off 10s restarting failed container=fsharp-service pod=fsharp-service-7lrq8_default(d4dab099-04ee-11e6-b7f9-0a11c670939b)"

  Sun, 17 Apr 2016 15:51:15 -0700   Sun, 17 Apr 2016 15:51:15 -0700 1   {kubelet ip-172-20-0-228.us-west-2.compute.internal}    spec.containers{fsharp-service} Started     Started container with docker id c2e348e1722d
  Sun, 17 Apr 2016 15:51:15 -0700   Sun, 17 Apr 2016 15:51:15 -0700 1   {kubelet ip-172-20-0-228.us-west-2.compute.internal}    spec.containers{fsharp-service} Created     Created container with docker id c2e348e1722d
  Sun, 17 Apr 2016 15:51:17 -0700   Sun, 17 Apr 2016 15:51:31 -0700 2   {kubelet ip-172-20-0-228.us-west-2.compute.internal}                    FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "fsharp-service" with CrashLoopBackOff: "Back-off 20s restarting failed container=fsharp-service pod=fsharp-service-7lrq8_default(d4dab099-04ee-11e6-b7f9-0a11c670939b)"

  Sun, 17 Apr 2016 15:50:50 -0700   Sun, 17 Apr 2016 15:51:44 -0700 4   {kubelet ip-172-20-0-228.us-west-2.compute.internal}    spec.containers{fsharp-service} Pulling     pulling image "fsharp/fsharp:latest"
  Sun, 17 Apr 2016 15:51:45 -0700   Sun, 17 Apr 2016 15:51:45 -0700 1   {kubelet ip-172-20-0-228.us-west-2.compute.internal}    spec.containers{fsharp-service} Created     Created container with docker id edaea97fb379
  Sun, 17 Apr 2016 15:50:51 -0700   Sun, 17 Apr 2016 15:51:45 -0700 4   {kubelet ip-172-20-0-228.us-west-2.compute.internal}    spec.containers{fsharp-service} Pulled      Successfully pulled image "fsharp/fsharp:latest"
  Sun, 17 Apr 2016 15:51:46 -0700   Sun, 17 Apr 2016 15:51:46 -0700 1   {kubelet ip-172-20-0-228.us-west-2.compute.internal}    spec.containers{fsharp-service} Started     Started container with docker id edaea97fb379
  Sun, 17 Apr 2016 15:50:58 -0700   Sun, 17 Apr 2016 15:52:27 -0700 7   {kubelet ip-172-20-0-228.us-west-2.compute.internal}    spec.containers{fsharp-service} BackOff     Back-off restarting failed docker container
  Sun, 17 Apr 2016 15:51:48 -0700   Sun, 17 Apr 2016 15:52:27 -0700 4   {kubelet ip-172-20-0-228.us-west-2.compute.internal}                    FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "fsharp-service" with CrashLoopBackOff: "Back-off 40s restarting failed container=fsharp-service pod=fsharp-service-7lrq8_default(d4dab099-04ee-11e6-b7f9-0a11c670939b)"

What is wrong here?

How can I find out why the controller fails to start properly?

Update.

I tried switching from the plain "fsharp/fsharp:latest" image to another image that has a service listening on a port, which is how I actually intend to use the container.

The image is called "username/someservice:mytag" and has a service listening on port 3000.

I run the service as:

mono Service.exe

When I look at the logs, I see this:

$ kubectl logs -p fsharp-service-wjmpv
Running on http://127.0.0.1:3000
Press enter to exit

So even though the process should not exit, the container ends up in the same state:

$ kubectl get pods
NAME                   READY     REASON             RESTARTS   AGE
fsharp-service-wjmpv   0/1       CrashLoopBackOff   9          25m

I also tried running a container from my image with the -i flag so the container would not exit, but kubectl does not seem to recognize the -i flag.

Any ideas?

3 Answers:

Answer 0 (score: 3)

You are starting a container that exits immediately. The kubelet notices this, restarts it, and it exits again. After this happens a few times, the kubelet backs off and restarts the container less and less frequently (that is the CrashLoopBackOff state).

The fsharp documentation says to run the container with the -t -i flags, i.e. docker run -t -i fsharp/fsharp:latest, which gives you an interactive prompt. If you instead run it without the interactive flags:

docker run fsharp/fsharp:latest

you will notice that the container exits immediately and dumps you back into your local shell. That is how the container is being invoked in your cluster, and there it exits immediately in the same way.
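
As a sanity check, you could also override the container's command in the RC spec so that PID 1 is a long-running process; the sleep placeholder below is only illustrative (in practice you would run your actual service there, e.g. mono Service.exe):

"spec":{
   "containers":[
      {
         "name":"fsharp-service",
         "image":"fsharp/fsharp:latest",
         "command":["sleep", "infinity"],
         "ports":[
            {
               "name":"http-server",
               "containerPort":3000
            }
         ]
      }
   ]
}

With a command that never returns, the pod stays Running and you can kubectl exec into it to debug.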

Answer 1 (score: 3)

I would use kubectl logs to try to find out what is happening inside the container, like this:

kubectl logs -p fsharp-service-7lrq8

The -p flag gets you the logs from the previous run of the container, which is needed here because the container keeps crashing.

More info: http://kubernetes.io/docs/user-guide/kubectl/kubectl_logs/
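
If the previous logs are empty, a general next step (illustrative, not from the original answer) is to inspect the recorded termination state of the crashed container:

kubectl get pod fsharp-service-7lrq8 -o yaml

and look under status.containerStatuses[].lastState.terminated for the exit code and reason.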

Answer 2 (score: 0)

I added the following lines to my F# service (Unix-specific code) to make sure the process does not exit:

// Block the main thread until a termination signal arrives (requires the Mono.Posix assembly).
open Mono.Unix
open Mono.Unix.Native

let signals = [| new UnixSignal (Signum.SIGINT);
                 new UnixSignal (Signum.SIGTERM);
                 new UnixSignal (Signum.SIGQUIT) |]

// Wait indefinitely (-1 = no timeout) until one of the signals is received.
let which = UnixSignal.WaitAny (signals, -1)

After that, my replication controller runs fine.