Unable to connect a gRPC microservice to its client - NestJS (Node) on Kubernetes

Date: 2020-07-25 19:20:18

Tags: node.js kubernetes grpc grpc-node linkerd

I'm currently stuck connecting to a ClusterIP service in Kubernetes. The main goal is to connect one pod (a gRPC microservice) with another pod (a Node client). I'm using the service name to expose the products microservice and connect to it, but when the client tries to call the microservice I get this error:

"Error: 14 UNAVAILABLE: failed to connect to all addresses",
            "    at Object.exports.createStatusError (/usr/src/app/node_modules/grpc/src/common.js:91:15)",
            "    at Object.onReceiveStatus (/usr/src/app/node_modules/grpc/src/client_interceptors.js:1209:28)",
            "    at InterceptingListener._callNext (/usr/src/app/node_modules/grpc/src/client_interceptors.js:568:42)",
            "    at InterceptingListener.onReceiveStatus (/usr/src/app/node_modules/grpc/src/client_interceptors.js:618:8)",
            "    at callback (/usr/src/app/node_modules/grpc/src/client_interceptors.js:847:24)"

I checked the Docker image I built and it points to the address url: '0.0.0.0:50051', which is not the setup recommended in this post: https://kubernetes.io/blog/2018/11/07/grpc-load-balancing-on-kubernetes-without-tears/. So far I have just one products microservice, which contains the product-management logic and is built with Node.js and gRPC (it works perfectly locally). I named it xxx-microservice-products-deployment, and its Kubernetes definition is as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pinebox-microservice-products-deployment
  labels:
    app: pinebox
    type: microservice
    domain: products
spec:
  template:
    metadata:
      name: pinebox-microservice-products-pod
      labels:
        app: pinebox
        type: microservice
        domain: products
    spec:
      containers:
        - name: pg-container
          image: postgres
          env:
            - name: POSTGRES_USER
              value: testuser
            - name: POSTGRES_PASSWORD
              value: testpass
            - name: POSTGRES_DB
              value: db_development
          ports:
            - containerPort: 5432
        - name: microservice-container
          image: registry.digitalocean.com/pinebox/pinebox-microservices-products:latest
      imagePullSecrets:
        - name: regcred
  replicas: 1
  selector:
    matchLabels:
      app: pinebox
      type: microservice
      domain: products

Then, to connect to it, we created a ClusterIP service exposing port 50051; its Kubernetes definition is as follows:

kind: Service
apiVersion: v1
metadata:
  name: pinebox-products-microservice
spec:
  selector:
    app: pinebox
    type: microservice
    domain: products
  ports:
    - targetPort: 50051
      port: 50051
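
For reference, inside the cluster this Service is normally reached through its DNS name rather than through an IP or 0.0.0.0. A minimal sketch of the address a gRPC client would dial, assuming the Service lives in the default namespace:

// Sketch only: the in-cluster address of the Service defined above
// (the default namespace is assumed here).
const PRODUCTS_GRPC_URL = 'pinebox-products-microservice:50051';
// Fully qualified form:
// 'pinebox-products-microservice.default.svc.cluster.local:50051'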

Now, we also have a Node client that exposes API (GET, POST) methods which, under the hood, call the microservice. I named the client xxx-api-main-app-deployment, and its Kubernetes definition is as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pinebox-api-main-app-deployment
  labels:
    app: pinebox
    type: api
    domain: main-app
    role: users-service
spec:
  template:
    metadata:
      name: pinebox-api-main-app-pod
      labels:
        app: pinebox
        type: api
        domain: main-app
        role: products-service
    spec:
      containers:
        - name: pinebox-api-main-app-container
          image: registry.digitalocean.com/pinebox/pinebox-main-app:latest
      imagePullSecrets:
        - name: regcred
  replicas: 1
  selector:
    matchLabels:
      app: pinebox
      type: api
      domain: main-app
      role: products-service

In addition, I created a service that exposes the API; its Kubernetes definition is:

kind: Service
apiVersion: v1
metadata:
  name: pinebox-api-main-app-service
spec:
  selector:
    app: pinebox
    type: api
    domain: main-app
    role: products-service
  type: NodePort
  ports:
    - name: name-of-the-port
      port: 3333
      targetPort: 3333
      nodePort: 30003

Up to this point everything looks fine. But when I try to make a call to the service, I get this error:

"Error: 14 UNAVAILABLE: failed to connect to all addresses",
            "    at Object.exports.createStatusError (/usr/src/app/node_modules/grpc/src/common.js:91:15)",
            "    at Object.onReceiveStatus (/usr/src/app/node_modules/grpc/src/client_interceptors.js:1209:28)",
            "    at InterceptingListener._callNext (/usr/src/app/node_modules/grpc/src/client_interceptors.js:568:42)",
            "    at InterceptingListener.onReceiveStatus (/usr/src/app/node_modules/grpc/src/client_interceptors.js:618:8)",
            "    at callback (/usr/src/app/node_modules/grpc/src/client_interceptors.js:847:24)"

I haven't found anything useful to make it work. Any clues?

After digging deeper into this problem, I found that the Kubernetes team recommends using Linkerd to proxy the connection at the HTTP/2 level, because Kubernetes' built-in connection-level load balancing doesn't work well for gRPC in this scenario. So I followed the post at https://kubernetes.io/blog/2018/11/07/grpc-load-balancing-on-kubernetes-without-tears/, then went to the Linkerd guide and followed the installation steps. I can now see the Linkerd dashboard, but the client still cannot talk to the microservice. So I checked whether the port is exposed inside the client pod, verifying with the following commands:

$ kubectl exec -i -t pod/pinebox-api-main-app-deployment-5fb5d4bf9f-ttwn5 --container pinebox-api-main-app-container -- /bin/bash
$ printenv

Here is the output:

PINEBOX_PRODUCTS_MICROSERVICE_PORT_50051_TCP_PORT=50051
KUBERNETES_SERVICE_PORT_HTTPS=443
PINEBOX_PRODUCTS_MICROSERVICE_SERVICE_PORT=50051
KUBERNETES_PORT_443_TCP_PORT=443
PINEBOX_API_MAIN_APP_SERVICE_SERVICE_PORT_NAME_OF_THE_PORT=3333
PORT=3000
NODE_VERSION=12.18.2
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
PINEBOX_API_MAIN_APP_SERVICE_PORT_3333_TCP_PORT=3333
PINEBOX_PRODUCTS_MICROSERVICE_SERVICE_HOST=10.105.230.111
TERM=xterm
PINEBOX_API_MAIN_APP_SERVICE_PORT=tcp://10.106.81.212:3333
SHLVL=1
PINEBOX_PRODUCTS_MICROSERVICE_PORT=tcp://10.105.230.111:50051
KUBERNETES_SERVICE_PORT=443
PINEBOX_PRODUCTS_MICROSERVICE_PORT_50051_TCP=tcp://10.105.230.111:50051
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PINEBOX_API_MAIN_APP_SERVICE_SERVICE_PORT=3333
KUBERNETES_SERVICE_HOST=10.96.0.1
_=/usr/bin/printenv
root@pinebox-api-main-app-deployment-5fb5d4bf9f-ttwn5:/usr/src/app# 

So, as you can see, there are env variables containing the service's host and port, so that part is working. I'm not using the IP directly, because it won't hold up once I scale the deployment to more replicas. Then I verified that the microservice itself is running, using the following command:

kubectl logs pod/xxx-microservice-products-deployment-78df57c96d-tlvvj -c microservice-container

Here is the output:

[Nest] 1   - 07/25/2020, 4:23:22 PM   [NestFactory] Starting Nest application...
[Nest] 1   - 07/25/2020, 4:23:22 PM   [InstanceLoader] PineboxMicroservicesProductsDataAccessModule dependencies initialized +12ms
[Nest] 1   - 07/25/2020, 4:23:22 PM   [InstanceLoader] PineboxMicroservicesProductsFeatureShellModule dependencies initialized +0ms
[Nest] 1   - 07/25/2020, 4:23:22 PM   [InstanceLoader] AppModule dependencies initialized +0ms       
[Nest] 1   - 07/25/2020, 4:23:22 PM   [NestMicroservice] Nest microservice successfully started +22ms
[Nest] 1   - 07/25/2020, 4:23:22 PM   Microservice Products is listening +15ms

Everything looks good. So I then re-checked which URL and port the code is using on each side:

  • 微服务
const microservicesOptions = {
  transport: Transport.GRPC,
  options: {
    url: '0.0.0.0:50051',
    credentials: ServerCredentials.createInsecure(),
    package: 'grpc.health.v1',
    protoPath: join(__dirname, 'assets/health.proto'),
  },
};
  • Client:
ClientsModule.register([
  {
    name: 'HERO_PACKAGE',
    transport: Transport.GRPC,
    options: {
      url: '0.0.0.0:50051',
      package: 'grpc.health.v1',
      protoPath: join(__dirname, 'assets/health.proto'),
      // credentials: credentials.createInsecure()
    },
  },
])
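
For comparison, here is a minimal sketch of the same client registration dialing the ClusterIP Service by its DNS name instead of 0.0.0.0 (the module name, the default namespace, and reuse of the same health.proto asset are assumptions):

import { Module } from '@nestjs/common';
import { ClientsModule, Transport } from '@nestjs/microservices';
import { join } from 'path';

// Sketch: identical options, but the URL points at the Kubernetes Service
// 'pinebox-products-microservice' defined earlier (default namespace assumed).
@Module({
  imports: [
    ClientsModule.register([
      {
        name: 'HERO_PACKAGE',
        transport: Transport.GRPC,
        options: {
          url: 'pinebox-products-microservice:50051',
          package: 'grpc.health.v1',
          protoPath: join(__dirname, 'assets/health.proto'),
        },
      },
    ]),
  ],
})
export class ProductsClientModule {} // hypothetical module name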

Then I decided to check the logs of the linkerd-init container running in the client pod:

kubectl logs pod/xxx-api-main-app-deployment-5fb5d4bf9f-ttwn5 -c linkerd-init

The output is this:

2020/07/25 16:37:50 Tracing this script execution as [1595695070]
2020/07/25 16:37:50 State of iptables rules before run:
2020/07/25 16:37:50 > iptables -t nat -vnL
2020/07/25 16:37:50 < Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
2020/07/25 16:37:50 > iptables -t nat -F PROXY_INIT_REDIRECT
2020/07/25 16:37:50 < iptables: No chain/target/match by that name.
2020/07/25 16:37:50 > iptables -t nat -X PROXY_INIT_REDIRECT
2020/07/25 16:37:50 < iptables: No chain/target/match by that name.
2020/07/25 16:37:50 Will ignore port(s) [4190 4191] on chain PROXY_INIT_REDIRECT
2020/07/25 16:37:50 Will redirect all INPUT ports to proxy
2020/07/25 16:37:50 > iptables -t nat -F PROXY_INIT_OUTPUT
2020/07/25 16:37:50 < iptables: No chain/target/match by that name.
2020/07/25 16:37:50 > iptables -t nat -X PROXY_INIT_OUTPUT
2020/07/25 16:37:50 < iptables: No chain/target/match by that name.
2020/07/25 16:37:50 Ignoring uid 2102
2020/07/25 16:37:50 Redirecting all OUTPUT to 4140
2020/07/25 16:37:50 Executing commands:
2020/07/25 16:37:50 > iptables -t nat -N PROXY_INIT_REDIRECT -m comment --comment proxy-init/redirect-common-chain/1595695070
2020/07/25 16:37:50 <
2020/07/25 16:37:50 > iptables -t nat -A PROXY_INIT_REDIRECT -p tcp --match multiport --dports 4190,4191 -j RETURN -m comment --comment proxy-init/ignore-port-4190,4191/1595695070
2020/07/25 16:37:50 <
2020/07/25 16:37:50 > iptables -t nat -A PROXY_INIT_REDIRECT -p tcp -j REDIRECT --to-port 4143 -m comment --comment proxy-init/redirect-all-incoming-to-proxy-port/1595695070
2020/07/25 16:37:50 <
2020/07/25 16:37:50 > iptables -t nat -A PREROUTING -j PROXY_INIT_REDIRECT -m comment --comment proxy-init/install-proxy-init-prerouting/1595695070
2020/07/25 16:37:50 <
2020/07/25 16:37:50 > iptables -t nat -N PROXY_INIT_OUTPUT -m comment --comment proxy-init/redirect-common-chain/1595695070
2020/07/25 16:37:50 <
2020/07/25 16:37:50 > iptables -t nat -A PROXY_INIT_OUTPUT -m owner --uid-owner 2102 -o lo ! -d 127.0.0.1/32 -j PROXY_INIT_REDIRECT -m comment --comment proxy-init/redirect-non-loopback-local-traffic/1595695070
2020/07/25 16:37:51 <
2020/07/25 16:37:51 > iptables -t nat -A PROXY_INIT_OUTPUT -m owner --uid-owner 2102 -j RETURN -m comment --comment proxy-init/ignore-proxy-user-id/1595695070
2020/07/25 16:37:51 <
2020/07/25 16:37:51 > iptables -t nat -A PROXY_INIT_OUTPUT -o lo -j RETURN -m comment --comment proxy-init/ignore-loopback/1595695070
2020/07/25 16:37:51 <
2020/07/25 16:37:51 > iptables -t nat -A PROXY_INIT_OUTPUT -p tcp -j REDIRECT --to-port 4140 -m comment --comment proxy-init/redirect-all-outgoing-to-proxy-port/1595695070
2020/07/25 16:37:51 <
2020/07/25 16:37:51 > iptables -t nat -A OUTPUT -j PROXY_INIT_OUTPUT -m comment --comment proxy-init/install-proxy-init-output/1595695070
2020/07/25 16:37:51 <
2020/07/25 16:37:51 > iptables -t nat -vnL
2020/07/25 16:37:51 < Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 PROXY_INIT_REDIRECT  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* proxy-init/install-proxy-init-prerouting/1595695070 */
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 PROXY_INIT_OUTPUT  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* proxy-init/install-proxy-init-output/1595695070 */
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
Chain PROXY_INIT_OUTPUT (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 PROXY_INIT_REDIRECT  all  --  *      lo      0.0.0.0/0           !127.0.0.1            owner UID match 2102 /* proxy-init/redirect-non-loopback-local-traffic/1595695070 */
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            owner UID match 2102 /* proxy-init/ignore-proxy-user-id/1595695070 */
    0     0 RETURN     all  --  *      lo      0.0.0.0/0            0.0.0.0/0            /* proxy-init/ignore-loopback/1595695070 */
    0     0 REDIRECT   tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* proxy-init/redirect-all-outgoing-to-proxy-port/1595695070 */ redir ports 4140
Chain PROXY_INIT_REDIRECT (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            multiport dports 4190,4191 /* proxy-init/ignore-port-4190,4191/1595695070 */
    0     0 REDIRECT   tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* proxy-init/redirect-all-incoming-to-proxy-port/1595695070 */ redir ports 4143
I'm not sure where the problem is, and thanks in advance for your help.
Hopefully this gives you more context so you can point me in the right direction.

1 Answer:

Answer (score: 0):

The iptables output from the Linkerd proxy-init looks fine.

Have you checked the logs of the linkerd-proxy container inside the pod? That might help you understand what's going on.

It's also worth trying the port-forward test that @KoopaKiller suggested.
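
A hypothetical sketch of that port-forward test, assuming kubectl port-forward svc/pinebox-products-microservice 50051:50051 is running locally and a copy of the same health.proto is available at the path below:

// Sketch only: raw gRPC health check against the forwarded port.
// Assumes the grpc and @grpc/proto-loader packages and a local
// assets/health.proto matching the one the microservice loads.
import { credentials, loadPackageDefinition } from 'grpc';
import { loadSync } from '@grpc/proto-loader';
import { join } from 'path';

const packageDefinition = loadSync(join(__dirname, 'assets/health.proto'));
// health.proto declares `package grpc.health.v1`, so the Health service lives there.
const grpcObject: any = loadPackageDefinition(packageDefinition);

const healthClient = new grpcObject.grpc.health.v1.Health(
  'localhost:50051',
  credentials.createInsecure(),
);

// An empty service name asks for the overall server status;
// expect SERVING if the microservice implements the standard health Check.
healthClient.check({ service: '' }, (err: Error | null, res: any) => {
  if (err) {
    console.error('Health check failed:', err.message);
  } else {
    console.log('Health check status:', res.status);
  }
});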