Docker-compose to K8s: how do I fix "didn't match Pod's node affinity"?

Date: 2021-05-03 04:18:05

Tags: docker-compose kubernetes-pod

This is a continuation of the question I asked in Hyperledger Fabric - migration from Docker swarm to Kubernetes possible?

After running kompose convert on my docker-compose files, I got exactly the same files as those listed in the answer I accepted there. I then ran the following commands in order:

$ kubectl apply -f dev-orderer1-pod.yaml
$ kubectl apply -f dev-orderer1-service.yaml
$ kubectl apply -f dev-peer1-pod.yaml
$ kubectl apply -f dev-peer1-service.yaml
$ kubectl apply -f dev-couchdb1-pod.yaml
$ kubectl apply -f dev-couchdb1-service.yaml
$ kubectl apply -f ar2bc-networkpolicy.yaml
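
For reference, the kompose command that produced these manifests (it is also recorded in the kompose.cmd annotation in the describe output below) was:

$ kompose convert -f docker-compose-orderer1.yaml -f docker-compose-peer1.yaml --volumes hostPath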

When I try to view my Pods, I see:

$ kubectl get pod
NAME           READY   STATUS    RESTARTS   AGE
dev-couchdb1   0/1     Pending   0          7m20s
dev-orderer1   0/1     Pending   0          8m25s
dev-peer1      0/1     Pending   0          7m39s

When I describe any of the three Pods, I see:

$ kubectl describe pod dev-orderer1
Name:         dev-orderer1
Namespace:    default
Priority:     0
Node:         <none>
Labels:       io.kompose.network/ar2bc=true
              io.kompose.service=dev-orderer1
Annotations:  kompose.cmd: kompose convert -f docker-compose-orderer1.yaml -f docker-compose-peer1.yaml --volumes hostPath
              kompose.version: 1.22.0 (955b78124)
Status:       Pending
IP:
IPs:          <none>
Containers:
  dev-orderer1:
    Image:      hyperledger/fabric-orderer:latest
    Port:       7050/TCP
    Host Port:  0/TCP
    Args:
      orderer
    Environment:
      ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE:  /var/hyperledger/orderer/tls/server.crt
      ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY:   /var/hyperledger/orderer/tls/server.key
      ORDERER_GENERAL_CLUSTER_ROOTCAS:            [/var/hyperledger/orderer/tls/ca.crt]
      ORDERER_GENERAL_GENESISFILE:                /var/hyperledger/orderer/orderer.genesis.block
      ORDERER_GENERAL_GENESISMETHOD:              file
      ORDERER_GENERAL_LISTENADDRESS:              0.0.0.0
      ORDERER_GENERAL_LOCALMSPDIR:                /var/hyperledger/orderer/msp
      ORDERER_GENERAL_LOCALMSPID:                 OrdererMSP
      ORDERER_GENERAL_LOGLEVEL:                   INFO
      ORDERER_GENERAL_TLS_CERTIFICATE:            /var/hyperledger/orderer/tls/server.crt
      ORDERER_GENERAL_TLS_ENABLED:                true
      ORDERER_GENERAL_TLS_PRIVATEKEY:             /var/hyperledger/orderer/tls/server.key
      ORDERER_GENERAL_TLS_ROOTCAS:                [/var/hyperledger/orderer/tls/ca.crt]
    Mounts:
      /var/hyperledger/orderer/msp from dev-orderer1-hostpath1 (rw)
      /var/hyperledger/orderer/orderer.genesis.block from dev-orderer1-hostpath0 (rw)
      /var/hyperledger/orderer/tls from dev-orderer1-hostpath2 (rw)
      /var/hyperledger/production/orderer from orderer1 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-44lfq (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  dev-orderer1-hostpath0:
    Type:          HostPath (bare host directory volume)
    Path:          /home/isprintsg/hlf/channel-artifacts/genesis.block
    HostPathType:
  dev-orderer1-hostpath1:
    Type:          HostPath (bare host directory volume)
    Path:          /home/isprintsg/hlf/crypto-config/ordererOrganizations/ar2dev.accessreal.com/orderers/orderer1.ar2dev.accessreal.com/msp
    HostPathType:
  dev-orderer1-hostpath2:
    Type:          HostPath (bare host directory volume)
    Path:          /home/isprintsg/hlf/crypto-config/ordererOrganizations/ar2dev.accessreal.com/orderers/orderer1.ar2dev.accessreal.com/tls
    HostPathType:
  orderer1:
    Type:          HostPath (bare host directory volume)
    Path:          /home/isprintsg/hlf
    HostPathType:
  default-token-44lfq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-44lfq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/hostname=isprintdev
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  51s (x27 over 27m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match Pod's node affinity.

That final error message is the same for all three Pods. I tried googling the message, but surprisingly I got no direct hits. What does the message mean, and how should I fix it? In case you're wondering, I'm still quite new to Kubernetes.


1 answer:

Answer 0: (score: 0)

I stumbled upon this question while researching a parallel issue. If it helps, I think this is your problem:

Node-Selectors:  kubernetes.io/hostname=isprintdev

That node selector tells Kubernetes to schedule these Pods only on a node whose hostname is isprintdev :)
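
A quick way to check this, assuming you can run kubectl against the cluster (the jsonpath expression just prints the Pod's nodeSelector):

$ kubectl get nodes --show-labels
$ kubectl get pod dev-orderer1 -o jsonpath='{.spec.nodeSelector}'

If no node carries the label kubernetes.io/hostname=isprintdev, the scheduler has nowhere to place the Pods. In that case, edit the generated pod YAML and remove or correct the nodeSelector (kompose most likely generated it from a placement constraint in your compose files), then delete and re-apply the Pod:

$ kubectl delete pod dev-orderer1
$ kubectl apply -f dev-orderer1-pod.yaml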

D