How do you set up a Mongo replica set on Kubernetes?

Date: 2015-12-10 17:20:38

Tags: mongodb docker kubernetes

I want to set up a MongoDB replica set on Kubernetes. I'd like three replicas, which means I need to start three instances.

Should I start three pods, each running Mongo, and use a service to point to the primary? Or should I use a replication controller somehow?

7 answers:

Answer 0 (score: 13)

This answer is now out of date. I've written a detailed step-by-step tutorial here using the newer approach. I highly recommend reading all of it.

In a nutshell, you run a sidecar application that configures the replica set for you, and either use a service per instance or ping the K8s API for the pod IP addresses.

Example: this works only on Google Cloud. You'll need to make modifications for other platforms, particularly around the volumes:

  1. Follow the example in https://github.com/leportlabs/mongo-k8s-sidecar.git
    • git clone https://github.com/leportlabs/mongo-k8s-sidecar.git
    • cd mongo-k8s-sidecar/example/
    • make add-replica ENV=GoogleCloudPlatform (do this three times)
  2. Connect to the replica set via the services.
    • mongodb://mongo-1,mongo-2,mongo-3:27017/dbname_?
  3. You can also use the raw pod IP addresses instead of creating a service per pod.
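The seed-list connection string in step 2 can also be assembled programmatically from the per-instance service names. A minimal Python sketch; the database name, the helper name, and the per-host port placement are assumptions (the answer's own URI leaves the database name elided):

```python
def seed_list_uri(services, port=27017, db="dbname", replica_set=None):
    """Build a MongoDB seed-list URI from service names.

    A sketch: the database name and the optional replicaSet
    parameter are placeholders, not values from the answer.
    """
    hosts = ",".join(f"{svc}:{port}" for svc in services)
    uri = f"mongodb://{hosts}/{db}"
    if replica_set:
        uri += f"?replicaSet={replica_set}"
    return uri

print(seed_list_uri(["mongo-1", "mongo-2", "mongo-3"]))
# → mongodb://mongo-1:27017,mongo-2:27017,mongo-3:27017/dbname
```

Hosts without an explicit port default to 27017 in MongoDB URIs, so writing the port on every host (as here) or only on the last one (as in the answer) is equivalent.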

Answer 1 (score: 5)

In general, to form a set of clustered nodes such as mongo with replica sets, you create a Service that tracks the pods under the service name (for example, create a MongoDB replication controller whose pods are labeled mongodb, and a Service named mongodb that tracks those instances). The Service can then be queried for its members. Using the API server, you can look up the nodes with:

curl -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt https://kubernetes/api/v1/namespaces/default/endpoints/mongodb

where mongodb is the name of the Service.

This returns a JSON object with a bunch of fields, so a good way to parse it is jq (https://stedolan.github.io/jq/).

Piping the curl command into a jq query like

jq '.subsets[].addresses[]' | jq '{ip: .ip, host: .targetRef.name}'

will return the IPs and hostnames of the mongodb instances in your cluster.

Now that you know who is in the cluster, you can create the replica set in your init script. Obviously this means you need to start the Service first, and your startup script has to wait until all the nodes are up and registered with the Service before it proceeds. If you use one image with one script, it will run on every node, so you need to check whether the replica set already exists, or handle the errors; the first pod to register should do the work. Another option is to run all the nodes as single nodes, then run a separate bootstrap script that creates the replica set.

Finally, when you call the mongodb cluster, make sure you specify the URL with the replica-set name as an option:

mongodb://mongodb:27017/database?replicaSet=replicaSetName

Since you don't know the IP of the primary, you call it through the mongodb Service, which load-balances requests across the nodes. If you don't specify the replica-set name, you'll end up with connection errors, because only the primary accepts write requests.

Obviously this isn't a step-by-step tutorial, but I hope it gets you started.
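The extraction that the jq query performs can also be sketched in plain Python inside a startup script. The Endpoints object below is made up (the addresses and pod names are invented, and a real API response has many more fields, which are simply ignored here):

```python
import json

# A made-up fragment shaped like the API server's Endpoints response.
endpoints_json = """
{
  "subsets": [
    {
      "addresses": [
        {"ip": "10.3.0.28", "targetRef": {"name": "mongo-rk9n8"}},
        {"ip": "10.3.0.56", "targetRef": {"name": "mongo-1-39nuw"}}
      ]
    }
  ]
}
"""

def members(endpoints):
    """Equivalent of jq '.subsets[].addresses[]' piped into
    jq '{ip: .ip, host: .targetRef.name}'."""
    return [
        {"ip": addr["ip"], "host": addr["targetRef"]["name"]}
        for subset in endpoints["subsets"]
        for addr in subset["addresses"]
    ]

print(members(json.loads(endpoints_json)))
```

In an init script this replaces the curl | jq pipeline once the response body is in hand.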

Answer 2 (score: 2)

Here is the example I'm currently running.

apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongo
  name: mongo-svc1
spec:
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    type: mongo-rs-A
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongo
  name: mongo-svc2
spec:
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    type: mongo-rs-B
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongo
  name: mongo-svc3
spec:
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    type: mongo-rs-C
---

apiVersion: v1
kind: ReplicationController

metadata:
  name: mongo

spec:
  replicas: 1
  selector:
    name: mongo-nodea
    role: mongo
    environment: test

  template:
    metadata:
      labels:
        name: mongo-nodea
        role: mongo
        environment: test
        type: mongo-rs-A
    spec:
      containers:
        - name: mongo-nodea
          image: mongo
          command:
            - mongod
            - "--replSet"
            - rsABC
            - "--smallfiles"
            - "--noprealloc"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
      volumes:
        - name: mongo-persistent-storage
          flocker:
            datasetName: FlockerMongoVolSetA
---
apiVersion: v1
kind: ReplicationController

metadata:
  name: mongo-1

spec:
  replicas: 1
  selector:
    name: mongo-nodeb
    role: mongo
    environment: test

  template:
    metadata:
      labels:
        name: mongo-nodeb
        role: mongo
        environment: test
        type: mongo-rs-B
    spec:
      containers:
        - name: mongo-nodeb
          image: mongo
          command:
            - mongod
            - "--replSet"
            - rsABC
            - "--smallfiles"
            - "--noprealloc"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
      volumes:
        - name: mongo-persistent-storage
          flocker:
            datasetName: FlockerMongoVolSetB
---
apiVersion: v1
kind: ReplicationController

metadata:
  name: mongo-2

spec:
  replicas: 1
  selector:
    name: mongo-nodec
    role: mongo
    environment: test

  template:
    metadata:
      labels:
        name: mongo-nodec
        role: mongo
        environment: test
        type: mongo-rs-C
    spec:
      containers:
        - name: mongo-nodec
          image: mongo
          command:
            - mongod
            - "--replSet"
            - rsABC
            - "--smallfiles"
            - "--noprealloc"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
      volumes:
        - name: mongo-persistent-storage
          flocker:
            datasetName: FlockerMongoVolSetC


kubectl --kubeconfig=clusters/k8s-mongo/kubeconfig get po,svc -L type,role,name
NAME            READY     STATUS    RESTARTS   AGE       TYPE         ROLE      NAME
mongo-1-39nuw   1/1       Running   0          1m        mongo-rs-B   mongo     mongo-nodeb
mongo-2-4tgho   1/1       Running   0          1m        mongo-rs-C   mongo     mongo-nodec
mongo-rk9n8     1/1       Running   0          1m        mongo-rs-A   mongo     mongo-nodea
NAME         CLUSTER_IP   EXTERNAL_IP   PORT(S)     SELECTOR          AGE       TYPE      ROLE      NAME
kubernetes   10.3.0.1     <none>        443/TCP     <none>            21h       <none>    <none>    <none>
mongo-svc1   10.3.0.28    <none>        27017/TCP   type=mongo-rs-A   1m        <none>    <none>    mongo
mongo-svc2   10.3.0.56    <none>        27017/TCP   type=mongo-rs-B   1m        <none>    <none>    mongo
mongo-svc3   10.3.0.47    <none>        27017/TCP   type=mongo-rs-C   1m        <none>    <none>    mongo

On the primary node, I go into the mongo shell and run:

rs.status()
rs.initiate()
rs.add("10.3.0.56:27017")

I'm currently hitting this issue: I end up with the two nodes in SECONDARY and STARTUP states, and no PRIMARY.

rs.status()
{
    "set" : "rsABC",
    "date" : ISODate("2016-01-21T22:51:33.216Z"),
    "myState" : 2,
    "term" : NumberLong(1),
    "heartbeatIntervalMillis" : NumberLong(2000),
    "members" : [
        {
            "_id" : 0,
            "name" : "mongo-rk9n8:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 242,
            "optime" : {
                "ts" : Timestamp(1453416638, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2016-01-21T22:50:38Z"),
            "infoMessage" : "could not find member to sync from",
            "configVersion" : 2,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "10.3.0.56:27017",
            "health" : 1,
            "state" : 0,
            "stateStr" : "STARTUP",
            "uptime" : 45,
            "optime" : {
                "ts" : Timestamp(0, 0),
                "t" : NumberLong(-1)
            },
            "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
            "lastHeartbeat" : ISODate("2016-01-21T22:51:28.639Z"),
            "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
            "pingMs" : NumberLong(40),
            "configVersion" : -2
        }
    ],
    "ok" : 1
}

Answer 3 (score: 1)

Take a look at the link below. In Kubernetes, if you create the service addresses first, then the controllers, the replica set comes up easily.... https://www.mongodb.com/blog/post/running-mongodb-as-a-microservice-with-docker-and-kubernetes

Answer 4 (score: 0)

@Stephen Nguyen

I just reproduced your case, created a namespace test for it (I changed your yaml files accordingly), and initialized my mongo with:

rs.initiate({
     "_id" : "rsABC",
     "members" : [
          {
               "_id" : 0,
               "host" : "mongo-svc1.test:27017",
               "priority" : 10
          },
          {
               "_id" : 1,
               "host" : "mongo-svc2.test:27017",
               "priority" : 9
          },
          {
               "_id" : 2,
               "host" : "mongo-svc3.test:27017",
                "arbiterOnly" : true
          }
     ]
})

It does seem to work:

> rs.status()
{
        "set" : "rsABC",
        "date" : ISODate("2016-05-10T07:45:25.975Z"),
        "myState" : 2,
        "term" : NumberLong(2),
        "syncingTo" : "mongo-svc1.test:27017",
        "heartbeatIntervalMillis" : NumberLong(2000),
        "members" : [
                {
                        "_id" : 0,
                        "name" : "mongo-svc1.test:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 657,
                        "optime" : {
                                "ts" : Timestamp(1462865715, 2),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2016-05-10T07:35:15Z"),
                        "lastHeartbeat" : ISODate("2016-05-10T07:45:25.551Z"),
                        "lastHeartbeatRecv" : ISODate("2016-05-10T07:45:25.388Z"),
                        "pingMs" : NumberLong(0),
                        "electionTime" : Timestamp(1462865715, 1),
                        "electionDate" : ISODate("2016-05-10T07:35:15Z"),
                        "configVersion" : 1
                },
                {
                        "_id" : 1,
                        "name" : "mongo-svc2.test:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 1171,
                        "optime" : {
                                "ts" : Timestamp(1462865715, 2),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2016-05-10T07:35:15Z"),
                        "syncingTo" : "mongo-svc1.test:27017",
                        "configVersion" : 1,
                        "self" : true
                },
                {
                        "_id" : 2,
                        "name" : "mongo-svc3.test:27017",
                        "health" : 1,
                        "state" : 7,
                        "stateStr" : "ARBITER",
                        "uptime" : 657,
                        "lastHeartbeat" : ISODate("2016-05-10T07:45:25.549Z"),
                        "lastHeartbeatRecv" : ISODate("2016-05-10T07:45:23.969Z"),
                        "pingMs" : NumberLong(0),
                        "configVersion" : 1
                }
        ],
        "ok" : 1
}

I add the mongo nodes by service name.
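A member document like the one passed to rs.initiate() above doesn't have to be written by hand. A small Python sketch that builds it; the helper name is made up, while the hosts and priorities mirror the example:

```python
def replset_config(set_id, members):
    """Build a config document for rs.initiate() from
    (host, extra-options) pairs. A sketch; the helper name is made up."""
    return {
        "_id": set_id,
        "members": [
            {"_id": i, "host": host, **extra}
            for i, (host, extra) in enumerate(members)
        ],
    }

config = replset_config("rsABC", [
    ("mongo-svc1.test:27017", {"priority": 10}),
    ("mongo-svc2.test:27017", {"priority": 9}),
    ("mongo-svc3.test:27017", {"arbiterOnly": True}),
])
print(config)
```

Serializing this dict to JSON gives exactly the document shown above, which keeps the member list in one place if you script the initialization.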

Answer 5 (score: 0)

Just as a heads-up: do not use the mongo-k8s-sidecar approach in production, as it has potentially dangerous consequences. For a more up-to-date approach to using MongoDB with k8s StatefulSets, see:

  1. Deploying a MongoDB Replica Set as a Kubernetes StatefulSet
  2. Configuring Some Key Production Settings for MongoDB on Kubernetes
  3. Using the Enterprise Version of MongoDB on Kubernetes
  4. Deploying a MongoDB Sharded Cluster using Kubernetes StatefulSets
  5. More information on MongoDB & Kubernetes is available at: http://k8smongodb.net/

Answer 6 (score: -1)

I'm using this as my solution. It is not production-ready yet.

Set up MongoDB replication:

Get all the MongoDB pod IPs: kubectl describe pod <PODNAME> | grep IP | sed -E 's/IP:[[:space:]]+//'
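The grep/sed extraction can equally be done in a script. A Python sketch over a made-up fragment of kubectl describe output (the pod name and IP are invented):

```python
import re

# Made-up fragment of `kubectl describe pod` output.
describe_output = (
    "Name:       mongo-rk9n8\n"
    "Namespace:  default\n"
    "IP:         10.244.1.5\n"
    "Status:     Running\n"
)

def pod_ip(text):
    """Extract the IP field, like `grep IP | sed -E 's/IP:[[:space:]]+//'`."""
    match = re.search(r"^IP:\s+(\S+)", text, re.MULTILINE)
    return match.group(1) if match else None

print(pod_ip(describe_output))  # → 10.244.1.5
```

Anchoring the pattern to the start of the line avoids false matches on other fields that merely contain "IP".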

and ...

Run kubectl exec -i <POD_1_NAME> mongo

and ...

rs.initiate({ 
     "_id" : "cloudboost", 
     "version":1,
     "members" : [ 
          {
               "_id" : 0,
               "host" : "<POD_1_IP>:27017",
               "priority" : 10
          },
          {
               "_id" : 1,
               "host" : "<POD_2_IP>:27017",
               "priority" : 9
          },
          {
               "_id" : 2,
               "host" : "<POD_3_IP>:27017",
               "arbiterOnly" : true
          }
     ]
});

For example:

rs.initiate({  
     "_id" : "cloudboost",
     "version":1,
     "members" : [ 
          {
               "_id" : 0,
               "host" : "10.244.1.5:27017",
               "priority" : 10
          },
          {
               "_id" : 1,
               "host" : "10.244.2.6:27017",
               "priority" : 9
          },
          {
               "_id" : 2,
               "host" : "10.244.3.5:27017",
               "arbiterOnly" : true
          }
     ]
}); 

Please note: the IPs for your cluster may differ.

TODO: Create a headless service to discover the nodes automatically and initialize the replica set.