I am installing the Kong ingress controller on an AKS cluster, but I do not want to install a Postgres StatefulSet service inside the cluster. Instead, I have a PostgreSQL database in my Azure infrastructure, and I want to connect it to the kong-ingress-controller deployment by creating the Postgres credentials as secrets in the AKS cluster and exposing them through environment variables.
I have already created the secret:
⟩ kubectl create secret generic az-pg-db-user-pass --from-literal=username='az-pg-username' --from-literal=password='az-pg-password' --namespace kong
secret/az-pg-db-user-pass created
In my kongwithingres.yaml file I have the deployment manifest declarations, which I am sharing through this gist link so as not to paste a huge number of YAML lines into the body of this question.
The gist is entirely based on an AKS deployment, but with the Postgres StatefulSet and Service removed for the reason above: my goal is to establish a connection to my own Azure-managed Postgres service.
I have wired the az-pg-db-user-pass generic secret created above into the kong-ingress-controller deployment, the kong deployment and the kong-migrations job (all shown in the gist), in order to create environment variables such as:
KONG_PG_USERNAME
KONG_PG_PASSWORD
These environment variables are created and referenced from the secret in the kong-ingress-controller deployment, the kong deployment and the kong-migrations job, which need them to connect to the Postgres database.
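This is roughly how those values are wired in the manifests (a minimal sketch; the secret name and keys are the ones created above, the host is my Azure Postgres server, and the surrounding container spec is omitted):
env:
  - name: KONG_PG_HOST
    value: zcrm365-postgresql1.postgres.database.azure.com
  - name: KONG_PG_USERNAME
    valueFrom:
      secretKeyRef:
        name: az-pg-db-user-pass
        key: username
  - name: KONG_PG_PASSWORD
    valueFrom:
      secretKeyRef:
        name: az-pg-db-user-pass
        key: password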
When I execute the kubectl apply -f kongwithingres.yaml command I get the following output: the kong-ingress-controller deployment, the kong deployment and the kong-migrations job are created successfully.
⟩ kubectl apply -f kongwithingres.yaml
namespace/kong unchanged
customresourcedefinition.apiextensions.k8s.io/kongplugins.configuration.konghq.com unchanged
customresourcedefinition.apiextensions.k8s.io/kongconsumers.configuration.konghq.com unchanged
customresourcedefinition.apiextensions.k8s.io/kongcredentials.configuration.konghq.com unchanged
customresourcedefinition.apiextensions.k8s.io/kongingresses.configuration.konghq.com unchanged
serviceaccount/kong-serviceaccount unchanged
clusterrole.rbac.authorization.k8s.io/kong-ingress-clusterrole unchanged
role.rbac.authorization.k8s.io/kong-ingress-role unchanged
rolebinding.rbac.authorization.k8s.io/kong-ingress-role-nisa-binding unchanged
clusterrolebinding.rbac.authorization.k8s.io/kong-ingress-clusterrole-nisa-binding unchanged
service/kong-ingress-controller created
deployment.extensions/kong-ingress-controller created
service/kong-proxy created
deployment.extensions/kong created
job.batch/kong-migrations created
[I]
But the corresponding pods are in CrashLoopBackOff status:
NAME READY STATUS RESTARTS AGE
pod/kong-d8b88df99-j6hvl 0/1 Init:CrashLoopBackOff 5 4m24s
pod/kong-ingress-controller-984fc9666-cd2b5 0/2 Init:CrashLoopBackOff 5 4m24s
pod/kong-migrations-t6n7p 0/1 CrashLoopBackOff 5 4m24s
I checked the logs of each pod and found the following.
For pod/kong-d8b88df99-j6hvl:
⟩ kubectl logs pod/kong-d8b88df99-j6hvl -p -n kong
Error from server (BadRequest): previous terminated container "kong-proxy" in pod "kong-d8b88df99-j6hvl" not found
This pod does receive the environment variables and the image, according to its describe output:
⟩ kubectl describe pod/kong-d8b88df99-j6hvl -n kong
Name: kong-d8b88df99-j6hvl
Namespace: kong
Status: Pending
IP: 10.244.1.18
Controlled By: ReplicaSet/kong-d8b88df99
Init Containers:
wait-for-migrations:
Container ID: docker://7007a89ada215daf853ec103d79dca60ccc5fb3a14c51ac6c5c56655da6da62f
Image: kong:1.0.0
Image ID: docker-pullable://kong@sha256:8fd6a312d7715a9cc85c49625a4c2f53951f6e4422926091e4d2ae67c480b6d5
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
kong migrations list
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 26 Feb 2019 16:25:01 +0100
Finished: Tue, 26 Feb 2019 16:25:01 +0100
Ready: False
Restart Count: 6
Environment:
KONG_ADMIN_LISTEN: off
KONG_PROXY_LISTEN: off
KONG_PROXY_ACCESS_LOG: /dev/stdout
KONG_ADMIN_ACCESS_LOG: /dev/stdout
KONG_PROXY_ERROR_LOG: /dev/stderr
KONG_ADMIN_ERROR_LOG: /dev/stderr
KONG_PG_HOST: zcrm365-postgresql1.postgres.database.azure.com
KONG_PG_USERNAME: <set to the key 'username' in secret 'az-pg-db-user-pass'> Optional: false
KONG_PG_PASSWORD: <set to the key 'password' in secret 'az-pg-db-user-pass'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-gnkjq (ro)
Containers:
kong-proxy:
Container ID:
Image: kong:1.0.0
Image ID:
Ports: 8000/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment:
KONG_PG_USERNAME: <set to the key 'username' in secret 'az-pg-db-user-pass'> Optional: false
KONG_PG_PASSWORD: <set to the key 'password' in secret 'az-pg-db-user-pass'> Optional: false
KONG_PG_HOST: zcrm365-postgresql1.postgres.database.azure.com
KONG_PROXY_ACCESS_LOG: /dev/stdout
KONG_PROXY_ERROR_LOG: /dev/stderr
KONG_ADMIN_LISTEN: off
KUBERNETES_PORT_443_TCP_ADDR: zcrm365-d73ab78d.hcp.westeurope.azmk8s.io
KUBERNETES_PORT: tcp://zcrm365-d73ab78d.hcp.westeurope.azmk8s.io:443
KUBERNETES_PORT_443_TCP: tcp://zcrm365-d73ab78d.hcp.westeurope.azmk8s.io:443
KUBERNETES_SERVICE_HOST: zcrm365-d73ab78d.hcp.westeurope.azmk8s.io
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-gnkjq (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-gnkjq:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-gnkjq
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8m44s default-scheduler Successfully assigned kong/kong-d8b88df99-j6hvl to aks-default-75800594-1
Normal Pulled 7m9s (x5 over 8m40s) kubelet, aks-default-75800594-1 Container image "kong:1.0.0" already present on machine
Normal Created 7m8s (x5 over 8m40s) kubelet, aks-default-75800594-1 Created container
Normal Started 7m7s (x5 over 8m40s) kubelet, aks-default-75800594-1 Started container
Warning BackOff 3m34s (x26 over 8m38s) kubelet, aks-default-75800594-1 Back-off restarting failed container
For pod/kong-ingress-controller-984fc9666-cd2b5:
kubectl logs pod/kong-ingress-controller-984fc9666-cd2b5 -p -n kong
Error from server (BadRequest): a container name must be specified for pod kong-ingress-controller-984fc9666-cd2b5, choose one of: [admin-api ingress-controller] or one of the init containers: [wait-for-migrations]
[I]
And its respective describe output:
⟩ kubectl describe pod/kong-ingress-controller-984fc9666-cd2b5 -n kong
Name: kong-ingress-controller-984fc9666-cd2b5
Namespace: kong
Status: Pending
IP: 10.244.2.18
Controlled By: ReplicaSet/kong-ingress-controller-984fc9666
Init Containers:
wait-for-migrations:
Container ID: docker://8eb035f755322b3ac72792d922974811933ba9a71afb1f4549cfe7e0a6519619
Image: kong:1.0.0
Image ID: docker-pullable://kong@sha256:8fd6a312d7715a9cc85c49625a4c2f53951f6e4422926091e4d2ae67c480b6d5
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
kong migrations list
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 26 Feb 2019 16:29:56 +0100
Finished: Tue, 26 Feb 2019 16:29:56 +0100
Ready: False
Restart Count: 7
Environment:
KONG_ADMIN_LISTEN: off
KONG_PROXY_LISTEN: off
KONG_PROXY_ACCESS_LOG: /dev/stdout
KONG_ADMIN_ACCESS_LOG: /dev/stdout
KONG_PROXY_ERROR_LOG: /dev/stderr
KONG_ADMIN_ERROR_LOG: /dev/stderr
KONG_PG_HOST: zcrm365-postgresql1.postgres.database.azure.com
KONG_PG_USERNAME: <set to the key 'username' in secret 'az-pg-db-user-pass'> Optional: false
KONG_PG_PASSWORD: <set to the key 'password' in secret 'az-pg-db-user-pass'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kong-serviceaccount-token-rc4sp (ro)
Containers:
admin-api:
Container ID:
Image: kong:1.0.0
Image ID:
Port: 8001/TCP
Host Port: 0/TCP
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Liveness: http-get http://:8001/status delay=30s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8001/status delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
KONG_PG_USERNAME: <set to the key 'username' in secret 'az-pg-db-user-pass'> Optional: false
KONG_PG_PASSWORD: <set to the key 'password' in secret 'az-pg-db-user-pass'> Optional: false
KONG_PG_HOST: zcrm365-postgresql1.postgres.database.azure.com
KONG_ADMIN_ACCESS_LOG: /dev/stdout
KONG_ADMIN_ERROR_LOG: /dev/stderr
KONG_ADMIN_LISTEN: 0.0.0.0:8001, 0.0.0.0:8444 ssl
KONG_PROXY_LISTEN: off
KUBERNETES_PORT_443_TCP_ADDR: zcrm365-d73ab78d.hcp.westeurope.azmk8s.io
KUBERNETES_PORT: tcp://zcrm365-d73ab78d.hcp.westeurope.azmk8s.io:443
KUBERNETES_PORT_443_TCP: tcp://zcrm365-d73ab78d.hcp.westeurope.azmk8s.io:443
KUBERNETES_SERVICE_HOST: zcrm365-d73ab78d.hcp.westeurope.azmk8s.io
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kong-serviceaccount-token-rc4sp (ro)
ingress-controller:
Container ID:
Image: kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.3.0
Image ID:
Port: <none>
Host Port: <none>
Args:
/kong-ingress-controller
--kong-url=https://localhost:8444
--admin-tls-skip-verify
--default-backend-service=kong/kong-proxy
--publish-service=kong/kong-proxy
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Liveness: http-get http://:10254/healthz delay=30s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:10254/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
POD_NAME: kong-ingress-controller-984fc9666-cd2b5 (v1:metadata.name)
POD_NAMESPACE: kong (v1:metadata.namespace)
KUBERNETES_PORT_443_TCP_ADDR: zcrm365-d73ab78d.hcp.westeurope.azmk8s.io
KUBERNETES_PORT: tcp://zcrm365-d73ab78d.hcp.westeurope.azmk8s.io:443
KUBERNETES_PORT_443_TCP: tcp://zcrm365-d73ab78d.hcp.westeurope.azmk8s.io:443
KUBERNETES_SERVICE_HOST: zcrm365-d73ab78d.hcp.westeurope.azmk8s.io
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kong-serviceaccount-token-rc4sp (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
kong-serviceaccount-token-rc4sp:
Type: Secret (a volume populated by a Secret)
SecretName: kong-serviceaccount-token-rc4sp
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 12m default-scheduler Successfully assigned kong/kong-ingress-controller-984fc9666-cd2b5 to aks-default-75800594-2
Normal Pulled 10m (x5 over 12m) kubelet, aks-default-75800594-2 Container image "kong:1.0.0" already present on machine
Normal Created 10m (x5 over 12m) kubelet, aks-default-75800594-2 Created container
Normal Started 10m (x5 over 12m) kubelet, aks-default-75800594-2 Started container
Warning BackOff 2m14s (x49 over 12m) kubelet, aks-default-75800594-2 Back-off restarting failed container
[I]
~/workspace/ZCRM365/Deployments/Kubernetes/kong · (Deployments±)
⟩
I do not know the reason for the CrashLoopBackOff status, nor for the containers' Waiting: PodInitializing state.
How can I debug this behaviour? Could it be that Kong cannot talk to the Postgres database?
My AKS cluster is on Azure, and so is my Postgres database; they have connectivity between them as services.
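If needed, a quick way to sanity-check that connectivity from inside the cluster is a throwaway client pod (purely illustrative: the image tag and database name are placeholders, and Azure Database for PostgreSQL usually expects the login in user@servername form):
⟩ kubectl run psql-check -it --rm --restart=Never --image=postgres:11 -n kong -- psql -h zcrm365-postgresql1.postgres.database.azure.com -U az-pg-username@zcrm365-postgresql1 -d postgres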
UPDATE
These are the logs of my pods' containers:
⟩ kubectl logs pod/kong-ingress-controller-984fc9666-w4vvn -p -n kong -c ingress-controller
Error from server (BadRequest): previous terminated container "ingress-controller" in pod "kong-ingress-controller-984fc9666-w4vvn" not found
[I]
⟩ kubectl logs pod/kong-d8b88df99-qsq4j -p -n kong -c kong-proxy
Error from server (BadRequest): previous terminated container "kong-proxy" in pod "kong-d8b88df99-qsq4j" not found
[I]
~/workspace/ZCRM365/Deployments/Kubernetes/kong · (Deployments±)
⟩
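Since the container that keeps restarting is actually the wait-for-migrations init container, its logs can be requested explicitly with -c (using one of the pod names shown above):
⟩ kubectl logs pod/kong-d8b88df99-qsq4j -n kong -c wait-for-migrations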
Answer 0 (score: 3)
My kong-ingress-controller deployment pods were in CrashLoopBackOff, and sometimes in Waiting: PodInitializing, because I had overlooked a few things:

- The main reason, as pointed out by @Amityo, is that the kong-ingress-controller and kong deployments each have an init container called wait-for-migrations, which waits for the kong-migrations job to run before they can start. From this I could tell that my Kong migrations needed to be executed first.

- But my kong-migrations job was not working, because I was missing the KONG_DATABASE environment variable parameter to set up the connection.

- Another reason my deployments were not working is that Kong, when connecting to Postgres internally, may expect the user environment variable defined in the container to be called KONG_PG_USER. I had named it KONG_PG_USERNAME, which could be another reason my scripts failed to execute. (I am not completely sure about this; a sketch of the corrected environment block follows this list.)
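A minimal sketch of the corrected environment block, assuming KONG_DATABASE=postgres and reusing the existing az-pg-db-user-pass secret (values other than the ones already shown in the describe output above are illustrative):
env:
  - name: KONG_DATABASE          # datastore selector that was missing before
    value: postgres
  - name: KONG_PG_HOST
    value: zcrm365-postgresql1.postgres.database.azure.com
  - name: KONG_PG_USER           # Kong's setting is pg_user, so the env var is KONG_PG_USER
    valueFrom:
      secretKeyRef:
        name: az-pg-db-user-pass
        key: username
  - name: KONG_PG_PASSWORD
    valueFrom:
      secretKeyRef:
        name: az-pg-db-user-pass
        key: password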
After these fixes, applying the manifest again created everything successfully:
⟩ kubectl create -f kongwithingres.yaml
namespace/kong created
secret/az-pg-db-user-pass created
customresourcedefinition.apiextensions.k8s.io/kongplugins.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongconsumers.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongcredentials.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongingresses.configuration.konghq.com created
serviceaccount/kong-serviceaccount created
clusterrole.rbac.authorization.k8s.io/kong-ingress-clusterrole created
role.rbac.authorization.k8s.io/kong-ingress-role created
rolebinding.rbac.authorization.k8s.io/kong-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/kong-ingress-clusterrole-nisa-binding created
service/kong-ingress-controller created
deployment.extensions/kong-ingress-controller created
service/kong-proxy created
deployment.extensions/kong created
job.batch/kong-migrations created
[I]
~/workspace/ZCRM365/Deployments/Kubernetes/kong · (Deployments)
By the way, to get started with Kong I suggest installing Konga, a front-end dashboard tool to manage Kong and to check the things that we otherwise do via yaml files.
We have this konga.yaml manifest, installed as a deployment in our Kubernetes cluster:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: konga
  namespace: kong
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: konga
    spec:
      containers:
      - env:
        - name: NODE_TLS_REJECT_UNAUTHORIZED
          value: "0"
        image: pantsel/konga:latest
        name: konga
        ports:
        - containerPort: 1337
And we can reach it locally on our machines via the kubectl port-forward command.
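For example, forwarding through the Deployment object so the generated pod name does not matter (1337 is the port exposed in the manifest above):
⟩ kubectl port-forward deployment/konga 1337:1337 -n kong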
We then go to http://localhost:1337/, where it is necessary to register an admin user in order to use Konga.
Finally, create a connection to Kong using the following Kong admin URL: http://kong-ingress-controller:8001/