I am new to Kubernetes and I have the following multi-tenancy scenario:
1) I have 3 namespaces:
default,
tenant1-namespace,
tenant2-namespace
2) The default namespace has two database pods:
tenant1-db - listening on port 5432
tenant2-db - listening on port 5432
The tenant1-namespace namespace has one application pod:
tenant1-app - listening on port 8085
The tenant2-namespace namespace has one application pod:
tenant2-app - listening on port 8085
3) I have applied 3 network policies in the default namespace:
a) Deny access to both db pods from other namespaces:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
b) Allow access to the tenant1-db pod only from tenant1-app in tenant1-namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces-except-specific-pod-1
  namespace: default
spec:
  podSelector:
    matchLabels:
      k8s-app: tenant1-db
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: tenant1-development
    - podSelector:
        matchLabels:
          app: tenant1-app
c) Allow access to the tenant2-db pod only from tenant2-app in tenant2-namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces-except-specific-pod-2
  namespace: default
spec:
  podSelector:
    matchLabels:
      k8s-app: tenant2-db
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: tenant2-development
    - podSelector:
        matchLabels:
          app: tenant2-app
I want to restrict access to tenant1-db to tenant1-app only, and access to tenant2-db to tenant2-app only. But it appears that tenant2-app can access tenant1-db, which should not happen.
Below is the db-config.js of tenant2-app:
module.exports = {
  HOST: "tenant1-db",
  USER: "postgres",
  PASSWORD: "postgres",
  DB: "tenant1db",
  dialect: "postgres",
  pool: {
    max: 5,
    min: 0,
    acquire: 30000,
    idle: 10000
  }
};
As you can see, I have pointed tenant2-app at tenant1-db. I only want tenant1-db to be reachable from tenant1-app. What modifications are needed in the network policies?
Update:
tenant1 deployment and service YAMLs:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: tenant1-app-deployment
  namespace: tenant1-namespace
spec:
  selector:
    matchLabels:
      app: tenant1-app
  replicas: 1 # tells deployment to run 1 pods matching the template
  template:
    metadata:
      labels:
        app: tenant1-app
    spec:
      containers:
      - name: tenant1-app-container
        image: tenant1-app-dock-img:v1
        ports:
        - containerPort: 8085
---
# https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service
kind: Service
apiVersion: v1
metadata:
  name: tenant1-app-service
  namespace: tenant1-namespace
spec:
  selector:
    app: tenant1-app
  ports:
  - protocol: TCP
    port: 8085
    targetPort: 8085
    nodePort: 31005
  type: LoadBalancer
tenant2-app deployment and service YAMLs:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: tenant2-app-deployment
  namespace: tenant2-namespace
spec:
  selector:
    matchLabels:
      app: tenant2-app
  replicas: 1 # tells deployment to run 1 pods matching the template
  template:
    metadata:
      labels:
        app: tenant2-app
    spec:
      containers:
      - name: tenant2-app-container
        image: tenant2-app-dock-img:v1
        ports:
        - containerPort: 8085
---
# https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service
kind: Service
apiVersion: v1
metadata:
  name: tenant2-app-service
  namespace: tenant2-namespace
spec:
  selector:
    app: tenant2-app
  ports:
  - protocol: TCP
    port: 8085
    targetPort: 8085
    nodePort: 31006
  type: LoadBalancer
Update 2:
db-pod1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: null
  generation: 1
  labels:
    k8s-app: tenant1-db
  name: tenant1-db
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: tenant1-db
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: tenant1-db
      name: tenant1-db
    spec:
      volumes:
      - name: tenant1-pv-storage
        persistentVolumeClaim:
          claimName: tenant1-pv-claim
      containers:
      - env:
        - name: POSTGRES_USER
          value: postgres
        - name: POSTGRES_PASSWORD
          value: postgres
        - name: POSTGRES_DB
          value: tenant1db
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        image: postgres:11.5-alpine
        imagePullPolicy: IfNotPresent
        name: tenant1-db
        volumeMounts:
        - mountPath: "/var/lib/postgresql/data/pgdata"
          name: tenant1-pv-storage
        resources: {}
        securityContext:
          privileged: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}
db-pod2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: null
  generation: 1
  labels:
    k8s-app: tenant2-db
  name: tenant2-db
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: tenant2-db
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: tenant2-db
      name: tenant2-db
    spec:
      volumes:
      - name: tenant2-pv-storage
        persistentVolumeClaim:
          claimName: tenant2-pv-claim
      containers:
      - env:
        - name: POSTGRES_USER
          value: postgres
        - name: POSTGRES_PASSWORD
          value: postgres
        - name: POSTGRES_DB
          value: tenant2db
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        image: postgres:11.5-alpine
        imagePullPolicy: IfNotPresent
        name: tenant2-db
        volumeMounts:
        - mountPath: "/var/lib/postgresql/data/pgdata"
          name: tenant2-pv-storage
        resources: {}
        securityContext:
          privileged: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}
Update 3:
kubectl get svc -n default
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE
kubernetes   ClusterIP      10.96.0.1        <none>           443/TCP          5d2h
nginx        ClusterIP      10.100.24.46     <none>           80/TCP           5d1h
tenant1-db   LoadBalancer   10.111.165.169   10.111.165.169   5432:30810/TCP   4d22h
tenant2-db   LoadBalancer   10.101.75.77     10.101.75.77     5432:30811/TCP   2d22h
kubectl get svc -n tenant1-namespace
NAME                  TYPE           CLUSTER-IP      EXTERNAL-IP                            PORT(S)          AGE
tenant1-app-service   LoadBalancer   10.111.200.49   10.111.200.49                          8085:31005/TCP   3d
tenant1-db            ExternalName   <none>          tenant1-db.default.svc.cluster.local   5432/TCP         2d23h
kubectl get svc -n tenant2-namespace
NAME                  TYPE           CLUSTER-IP     EXTERNAL-IP                            PORT(S)          AGE
tenant1-db            ExternalName   <none>         tenant1-db.default.svc.cluster.local   5432/TCP         2d23h
tenant2-app-service   LoadBalancer   10.99.139.18   10.99.139.18                           8085:31006/TCP   2d23h
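For reference, the tenant1-db entries of type ExternalName above are DNS aliases into the default namespace. Their manifests are not shown here, so the sketch below is only an assumption of what such a service would look like; it explains why db-config.js can use HOST: "tenant1-db" from inside tenant2-namespace.
# Sketch (assumed manifest, not part of the listings above): an ExternalName service
# that lets pods in tenant2-namespace resolve the short name "tenant1-db"
# to tenant1-db.default.svc.cluster.local.
kind: Service
apiVersion: v1
metadata:
  name: tenant1-db
  namespace: tenant2-namespace
spec:
  type: ExternalName
  externalName: tenant1-db.default.svc.cluster.local
  ports:
  - port: 5432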
Answer (score: 2)
Referring to the docs, let's walk through your policy for tenant2 below.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces-except-specific-pod-2
  namespace: default
spec:
  podSelector:
    matchLabels:
      k8s-app: tenant2-db
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: tenant2-development
    - podSelector:
        matchLabels:
          app: tenant2-app
The network policy you defined above has two elements in the `from` array, which means: allow connections from pods with the label app=tenant2-app in the local (default) namespace, OR from any pod in any namespace carrying the label name=tenant2-development.
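The difference comes down to where the dash sits in the `from` list. A minimal sketch of the two variants, using the labels from this question:
# Two "-" entries under "from": two separate peers, combined with OR.
ingress:
- from:
  - namespaceSelector:      # peer 1: ANY pod in a namespace labeled name=tenant2-development
      matchLabels:
        name: tenant2-development
  - podSelector:            # peer 2: pods labeled app=tenant2-app in the policy's own (default) namespace
      matchLabels:
        app: tenant2-app

# One entry combining both selectors: a single peer, both conditions must hold (AND).
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        name: tenant2-development
    podSelector:            # the pod must ALSO carry app=tenant2-app
      matchLabels:
        app: tenant2-app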
If you merge the two rules into a single rule, as shown below, it will fix the issue.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces-except-specific-pod-2
  namespace: default
spec:
  podSelector:
    matchLabels:
      k8s-app: tenant2-db
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: tenant2-development
      podSelector:
        matchLabels:
          app: tenant2-app
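After applying the updated policy you can double-check the rendered rule; the file name below is just an example:
# Apply the corrected policy (example file name)
kubectl apply -f tenant2-db-networkpolicy.yaml

# Confirm the ingress rule now combines both selectors into a single peer
kubectl describe networkpolicy deny-from-other-namespaces-except-specific-pod-2 -n default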
The above network policy means: allow connections only from pods with the label app=tenant2-app running in namespaces with the label name=tenant2-development.
Add the label name=tenant2-development to the tenant2-namespace namespace.
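For example (assuming the namespace is actually named tenant2-namespace, as in the deployment manifests above):
kubectl label namespace tenant2-namespace name=tenant2-development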
Do the same exercise for tenant1 with the policy below:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces-except-specific-pod-1
  namespace: default
spec:
  podSelector:
    matchLabels:
      k8s-app: tenant1-db
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: tenant1-development
      podSelector:
        matchLabels:
          app: tenant1-app
Add the label name=tenant1-development to the tenant1-namespace namespace.
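For example (again assuming the namespace names from the manifests), followed by a quick way to verify the labels and to check that cross-tenant access is now blocked. The throwaway test pod is only an illustration, and note that NetworkPolicy is only enforced if the cluster's CNI plugin supports it:
# Label the namespace so the namespaceSelector matches
kubectl label namespace tenant1-namespace name=tenant1-development

# Verify the labels on both tenant namespaces
kubectl get namespaces --show-labels

# From tenant2-namespace, a pod that is not tenant1-app should no longer reach tenant1-db
# (psql should time out instead of prompting for a password)
kubectl run np-test --rm -it --restart=Never --image=postgres:11.5-alpine -n tenant2-namespace -- \
  psql -h tenant1-db.default.svc.cluster.local -U postgres -d tenant1db -c 'SELECT 1'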