I deployed HashiCorp Vault with 3 replicas. Pod vault-0 is running, but the other two pods are stuck in Pending.
Here is my override yaml:
Running kubectl describe on the pending pods shows the status message below. I am not sure whether I added the correct affinity settings in the override file, and I don't know what I did wrong. I am deploying with the Vault Helm chart to a local Docker Desktop cluster. Thanks for your help.
Answer 0 (score: 1)
There are a few issues with your values.yaml file.
1. You set:

   server:
     auditStorage:
       enabled: true

but you did not specify how the PVC should be created or which storage class to use, and the chart expects you to do so when you enable storage. See: https://github.com/hashicorp/vault-helm/blob/master/values.yaml#L446 Either set it to false if you are just testing on your local machine, or specify the storage configuration.
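If you do want to keep audit storage enabled, a minimal sketch might look like the following. The size and storageClass fields are assumptions based on the chart's values.yaml, and "hostpath" is the default StorageClass on Docker Desktop, so adjust both for your cluster:

```yaml
server:
  auditStorage:
    enabled: true
    # Size of the PVC the chart will create for audit logs
    size: 10Gi
    # StorageClass used to provision the PVC; "hostpath" is the
    # Docker Desktop default, replace it on other clusters
    storageClass: hostpath
```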
2. You set the empty affinity variable for the injector, but not for the server. Set

   affinity: ""

for the server as well. See: https://github.com/hashicorp/vault-helm/blob/master/values.yaml#L347
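For context, the chart's default server affinity is a required pod anti-affinity keyed on the node hostname, roughly the snippet below (an approximation of the linked values.yaml, not an exact copy). This is why only one of the three server pods can schedule on a single-node Docker Desktop cluster; setting affinity: "" clears the constraint so all replicas can run on one node:

```yaml
# Approximate chart default: every vault server pod must land
# on a different node (topologyKey: kubernetes.io/hostname).
affinity: |
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: {{ template "vault.name" . }}
            app.kubernetes.io/instance: "{{ .Release.Name }}"
            component: server
        topologyKey: kubernetes.io/hostname
```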
3. An uninitialized and sealed Vault cluster is not really usable. You need to initialize and unseal Vault before it can become ready. That means configuring a readinessProbe, like this:

   server:
     readinessProbe:
       path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
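In a full override file this sits under server: and needs enabled: true as well; the chart also exposes an optional livenessProbe. A sketch, assuming the probe fields (enabled, path, initialDelaySeconds) from the chart's values.yaml:

```yaml
server:
  readinessProbe:
    enabled: true
    # Returning 204 for sealed/uninitialized states keeps the probe
    # from failing before you have had a chance to init and unseal
    path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
  livenessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true"
    initialDelaySeconds: 60
```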
4. The last one, and this is optional: those memory requests:

   resources:
     requests:
       memory: 4Gi
       cpu: 1000m
     limits:
       memory: 8Gi
       cpu: 1000m

are a bit on the high side. An HA cluster with 3 replicas, each requesting 4Gi of memory, can lead to Insufficient memory errors, most likely when deploying to a local cluster. Then again, your local machine might have 32 gigs of RAM, I don't know ;) If not, reduce them to fit your machine.
So the following values worked for me:
# Vault Helm Chart Value Overrides
global:
  enabled: true
  tlsDisable: true

injector:
  enabled: true
  # Use the Vault K8s Image https://github.com/hashicorp/vault-k8s/
  image:
    repository: "hashicorp/vault-k8s"
    tag: "0.9.0"
  resources:
    requests:
      memory: 256Mi
      cpu: 250m
    limits:
      memory: 256Mi
      cpu: 250m
  affinity: ""

server:
  auditStorage:
    enabled: false
  standalone:
    enabled: false
  image:
    repository: "hashicorp/vault"
    tag: "1.6.3"
  resources:
    requests:
      memory: 256Mi
      cpu: 200m
    limits:
      memory: 512Mi
      cpu: 400m
  affinity: ""
  readinessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      setNodeId: true
      config: |
        ui = true
        listener "tcp" {
          tls_disable = true
          address = "[::]:8200"
          cluster_address = "[::]:8201"
        }
        storage "raft" {
          path = "/vault/data"
        }
        service_registration "kubernetes" {}
    config: |
      ui = true
      listener "tcp" {
        tls_disable = true
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }
      service_registration "kubernetes" {}

# Vault UI
ui:
  enabled: true
  serviceType: "ClusterIP"
  externalPort: 8200