JCS cluster on OpenShift

Asked: 2020-08-18 21:30:43

Tags: kubernetes openshift

I am trying to set up a JCS distributed cache deployed on an OpenShift cluster (the cache has 3 nodes, per FT best practices). Any of the 3 cache instances can receive an event, and that event is distributed to the other instances over TCP connections (to synchronize state). JCS is configured as follows (assuming 3 nodes A, B, C in the cluster):

  • jcs.auxiliary.attributes.TcpListenerPort=Node-A-Host:2001 (TcpListenerPort is the local port on Node-A that receives cache events; the corresponding configuration on the other nodes is:

                  - jcs.auxiliary.attributes.TcpListenerPort=Node-B-Host:2002
                  - jcs.auxiliary.attributes.TcpListenerPort=Node-C-Host:2003)
    

  • jcs.auxiliary.attributes.TcpServers=Node-B-Host:2002,Node-C-Host:2003 (TcpServers is the list of peers to which the receiving instance distributes cache events)
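For context, a minimal sketch of what such a lateral-TCP auxiliary configuration could look like in Node-A's cache.ccf (the auxiliary name LTCP and the factory/attribute class names follow the Apache Commons JCS convention; the question abbreviates the property keys, and the host names here are placeholders):

```
# cache.ccf fragment for Node-A (sketch; auxiliary name and hosts are illustrative)
jcs.auxiliary.LTCP=org.apache.commons.jcs.auxiliary.lateral.socket.tcp.LateralTCPCacheFactory
jcs.auxiliary.LTCP.attributes=org.apache.commons.jcs.auxiliary.lateral.socket.tcp.TCPLateralCacheAttributes
# Local port on which this instance listens for cache events
jcs.auxiliary.LTCP.attributes.TcpListenerPort=2001
# Peers to which received cache events are distributed
jcs.auxiliary.LTCP.attributes.TcpServers=Node-B-Host:2002,Node-C-Host:2003
```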

The problem I am facing here is:

  • When we deploy the application in OpenShift, we do not know which node a Pod (a cache instance) will land on. This keeps me from configuring the TCP parameters for the cache instances.

Just wondering whether there is any reliable way to solve this on the OpenShift / K8s platform. Thanks.

1 Answer:

Answer 0 (score: 0)

You are looking for a StatefulSet.

The reason you currently cannot set the tcpServers list is that Pod hostnames are unpredictable when Pods are managed by Deployments, ReplicaSets, and the like.

Consider the following example:

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: openldap-kube
  namespace: ci
spec:
  podManagementPolicy: OrderedReady
  replicas: 3
  selector:
    matchLabels:
      name: openldap-kube
  serviceName: openldap-kube
  template:
    metadata:
      labels:
        name: openldap-kube
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: name
                operator: In
                values:
                - openldap-kube
            topologyKey: kubernetes.io/hostname
      containers:
      [ ... ]
        volumeMounts:
        - mountPath: /etc/ldap
          name: data
          subPath: config
        - mountPath: /var/lib/ldap
          name: data
          subPath: db
        - mountPath: /run
          name: run
      volumes:
      - emptyDir: {}
        name: run
  updateStrategy:
    type: RollingUpdate
  volumeClaimTemplates:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 8Gi
---
apiVersion: v1
kind: Service
metadata:
  name: openldap-kube
  namespace: ci
spec:
  clusterIP: None
  ports:
  - name: tcp-1389
    port: 1389
  - name: tcp-1636
    port: 1636
  selector:
    name: openldap-kube
  type: ClusterIP

In this case (a StatefulSet whose spec.serviceName references a headless Service, i.e. one with spec.type=ClusterIP and spec.clusterIP=None), each of my Pods gets its own DNS record. With the Service named openldap-kube in the ci namespace, I get the following DNS records in the SDN:

$ getent hosts openldap-kube-0.openldap-kube.ci.svc.cluster.local
10.233.99.125   openldap-kube-0.openldap-kube.ci.svc.cluster.local
$ getent hosts openldap-kube-1.openldap-kube.ci.svc.cluster.local
10.233.114.206  openldap-kube-1.openldap-kube.ci.svc.cluster.local openldap-kube-1
$ getent hosts openldap-kube.ci.svc.cluster.local
10.233.114.206  openldap-kube.ci.svc.cluster.local
10.233.99.125   openldap-kube.ci.svc.cluster.local
$ getent hosts openldap-kube                     
10.233.114.206  openldap-kube.ci.svc.cluster.local
10.233.99.125   openldap-kube.ci.svc.cluster.local
$ getent hosts openldap-kube-1.openldap-kube
10.233.114.206  openldap-kube-1.openldap-kube.ci.svc.cluster.local
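Given these stable per-Pod DNS names, each instance can compute its own peer list at startup from its ordinal hostname. A minimal sketch, assuming a 3-replica StatefulSet named myapp behind a headless Service myapp in namespace ci, with every instance listening on port 2001 (all of these names and ports are hypothetical):

```shell
#!/bin/sh
# Sketch: derive a JCS TcpServers list from a StatefulSet Pod's ordinal
# hostname. All names (myapp, ci, port 2001, 3 replicas) are hypothetical.
build_tcp_servers() {
  # $1 = this Pod's hostname (e.g. myapp-1)
  # $2 = headless Service DNS domain (e.g. myapp.ci.svc.cluster.local)
  # $3 = JCS TCP listener port, $4 = replica count
  ordinal="${1##*-}"   # trailing ordinal, e.g. "1"
  base="${1%-*}"       # StatefulSet name, e.g. "myapp"
  peers=""
  i=0
  while [ "$i" -lt "$4" ]; do
    # include every peer except ourselves
    if [ "$i" != "$ordinal" ]; then
      peers="${peers:+${peers},}${base}-${i}.${2}:${3}"
    fi
    i=$((i + 1))
  done
  printf '%s\n' "$peers"
}

# Example: Pod myapp-1 in a 3-replica StatefulSet behind Service myapp/ci
build_tcp_servers myapp-1 myapp.ci.svc.cluster.local 2001 3
# prints: myapp-0.myapp.ci.svc.cluster.local:2001,myapp-2.myapp.ci.svc.cluster.local:2001
```

An entrypoint script could write the resulting string into the TcpServers property of the JCS configuration before the application starts, so each replica gets a correct peer list without knowing in advance where it is scheduled.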

For some exhaustive examples, see:

https://gitlab.com/synacksynack/opsperator/docker-openldap/-/tree/master/deploy/kubernetes
https://gitlab.com/synacksynack/opsperator/docker-percona/-/tree/master/deploy/kubernetes
https://gitlab.com/synacksynack/opsperator/docker-mongodb/-/tree/master/deploy/kubernetes