On Google Kubernetes Engine (GKE) I have 3 nodes, each with 3.75 GB of memory.
I also have an API that is called on a single endpoint. This endpoint does batch inserts into MongoDB like this:
IMongoCollection<T> stageCollection = Database.GetCollection<T>(StageName);

// insert the entities in chunks of 1000 documents per request
foreach (var batch in entites.Batch(1000))
{
    await stageCollection.InsertManyAsync(batch);
}
Now it regularly happens that we end up in an out-of-memory scenario.
On the one hand we limit wiredTigerCacheSizeGB to 1.5, on the other hand we define a memory resource limit for the pod.
But the result is still the same. To me it looks as if MongoDB is simply not aware of the pod's memory limit. Is this a known issue? How can I deal with it without scaling up to a "monster" machine type?
The configuration YAML looks like this:
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 1
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo:3.6
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--bind_ip"
        - "0.0.0.0"
        - "--noprealloc"
        - "--wiredTigerCacheSizeGB"
        - "1.5"
        resources:
          limits:
            memory: "2Gi"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        - name: MONGO_SIDECAR_POD_LABELS
          value: "role=mongo,environment=test"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "fast"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 32Gi
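For reference, a sketch that is not part of the manifest above: as deployed, only the mongo container has a memory limit and the sidecar has no resources at all, so the pod ends up in the Burstable QoS class and is a comparatively early target for the kernel OOM killer under node memory pressure. Giving every container equal requests and limits would move the pod to the Guaranteed class. Only the 2Gi figure comes from the manifest; the CPU and sidecar values below are assumptions.

      containers:
      - name: mongo
        # ... image/command as above ...
        resources:
          requests:
            memory: "2Gi"     # equal requests and limits on every container -> Guaranteed QoS
            cpu: "500m"       # assumed value, not from the original manifest
          limits:
            memory: "2Gi"
            cpu: "500m"
      - name: mongo-sidecar
        # ... image/env as above ...
        resources:
          requests:
            memory: "256Mi"   # assumed value
            cpu: "100m"       # assumed value
          limits:
            memory: "256Mi"
            cpu: "100m"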
UPDATE
In the meantime I have also configured pod anti-affinity to make sure that there is no RAM interference from other pods on the node where MongoDB runs. But we still end up in OOM scenarios.
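The anti-affinity rule itself is not shown above. A minimal sketch of such a rule, assuming it sits in the pod spec of the other memory-hungry workloads and keys on the existing role=mongo label with the standard kubernetes.io/hostname topology key, could look like this:

      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                role: mongo             # keep this workload off nodes that already run the mongo pod
            topologyKey: "kubernetes.io/hostname"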