How to expose a REST API endpoint for an inference pipeline in Kubeflow

Asked: 2019-10-21 02:18:51

Tags: kubeflow kubeflow-pipelines

I have implemented a simple single-stage ML inference pipeline. I uploaded the pipeline through the Kubeflow UI and created a run. Everything appears fine, with no errors.

The pipeline contains one component whose container runs Flask and exposes a REST API. I can open a shell inside the Flask container's pod and successfully curl the API on 127.0.0.1.
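For context, a minimal sketch of the kind of Flask inference service described above might look like the following. The route name, payload shape, and dummy prediction are assumptions for illustration, not the actual component code:

```python
# Minimal sketch of a Flask inference service (assumed structure;
# the /predict route and dummy prediction are hypothetical).
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # Placeholder for the real model inference step
    payload = request.get_json(force=True)
    return jsonify({"input": payload, "prediction": 0})

# To serve inside the container, bind to 0.0.0.0 so the port is reachable
# from outside the container (matching the Service's TargetPort of 5000):
#   app.run(host="0.0.0.0", port=5000)
```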

I created a NodePort Service to expose the REST API, and that also looks fine.
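A NodePort Service of the kind described might be declared like this. The names and ports are taken from the `kubectl describe` output pasted further down; note that the `selector` only routes traffic to pods that actually carry a matching label:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: infer
  namespace: kubeflow
spec:
  type: NodePort
  # This selector must match labels present on the running pod.
  # The Argo-created pod shown below only carries
  # workflows.argoproj.io/* labels, not name=my_component.
  selector:
    name: my_component
  ports:
    - port: 88
      targetPort: 5000
      nodePort: 31128
```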

However, curling the node on port 31128 results in the connection being closed. Below is the pod describe output:



Name:           pipe1-llq4f-4023890614
Namespace:      kubeflow
Priority:       0
Node:           trainer/192.168.0.38
Start Time:     Thu, 17 Oct 2019 01:08:28 +0000
Labels:         workflows.argoproj.io/completed=false
                workflows.argoproj.io/workflow=pipe1-llq4f
Annotations:    pipelines.kubeflow.org/component_spec: {"name": "my_component"}
                workflows.argoproj.io/node-message:
                  Error response from daemon: No such container: e095eefba4123fd4b0df2a399ed6d385a668b65ee93419c427e2aae3bce97b32
                workflows.argoproj.io/node-name: pipe1-llq4f.infer
                workflows.argoproj.io/template:
                  {"name":"infer","inputs":{},"outputs":{},"metadata":{"annotations":{"pipelines.kubeflow.org/component_spec":"{\"name\": \"my_component\"}"...
Status:         Running
IP:             10.1.22.184
Controlled By:  Workflow/pipe1-llq4f
Containers:
  wait:
    Container ID:  containerd://e3e4987273b66da2d8e6f3d5af2b2280730fb533849f3430a4c2fa9f7fde9890
    Image:         argoproj/argoexec:v2.3.0
    Image ID:      docker.io/argoproj/argoexec@sha256:85132fc2c8bc373fca885df17637d5d35682a23de8d1390668a5e1c149f2f187
    Port:          <none>
    Host Port:     <none>
    Command:
      argoexec
      wait
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 17 Oct 2019 01:08:30 +0000
      Finished:     Thu, 17 Oct 2019 01:08:35 +0000
    Ready:          False
    Restart Count:  0
    Environment:
      ARGO_POD_NAME:  pipe1-llq4f-4023890614 (v1:metadata.name)
    Mounts:
      /argo/podmetadata from podmetadata (rw)
      /var/run/docker.sock from docker-sock (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from pipeline-runner-token-6gltt (ro)
  main:
    Container ID:   containerd://e095eefba4123fd4b0df2a399ed6d385a668b65ee93419c427e2aae3bce97b32
    Image:          praveen049/inf
    Image ID:       docker.io/praveen049/inf@sha256:1d6a236bba6d6ec634fbfc30092af70fe9c70c0b782d7bcbcb812cb33559bf09
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 17 Oct 2019 01:08:34 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from pipeline-runner-token-6gltt (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  podmetadata:
    Type:  DownwardAPI (a volume populated by information about the pod)
    Items:
      metadata.annotations -> annotations
  docker-sock:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/docker.sock
    HostPathType:  Socket
  pipeline-runner-token-6gltt:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  pipeline-runner-token-6gltt
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

NodePort Service details:


Name:                     infer
Namespace:                kubeflow
Labels:                   app=demo
                          name=infer
Annotations:              <none>
Selector:                 name=my_component
Type:                     NodePort
IP:                       10.152.183.21
Port:                     <unset>  88/TCP
TargetPort:               5000/TCP
NodePort:                 <unset>  31128/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

Any suggestions on what I am missing to expose the REST API and curl it via the node?

Thanks

0 Answers