I have deployed two backend services on Google Cloud Kubernetes Engine:

a) a backend service
b) an admin portal that needs to connect to the backend service

Everything runs in a single cluster. As shown under Workloads / Pods, I am running three deployments, where `fitme:9000` is the backend and `nginx-1:9000` is the admin portal service:
1. D1 (fitme), D2 (mongo-mongodb), D3 (nginx-1) are three deployments
2. E1D1 (fitme-service), E2D1 (fitme-jr29g), E1D2 (mongo-mongodb), E2D2 (mongo-mongodb-rcwwc) and E1D3 (nginx-1-service) are Services
3. `E1D1, E1D2 and E1D3` are exposed over `Load Balancer`, whereas `E2D1, E2D2` are exposed over `Cluster IP`.
The reasoning behind this:

D1 needs to access D2 (internally) -> this works fine. I am using the service exposed as E2D2 (Cluster IP) to access D2 from the D1 deployment.

Now, D3 needs to access the D1 deployment. So I exposed D1 as the E2D1 service and tried to access it internally through the Cluster IP generated for E2D1, but this gives a `request time out`.
YAML of the `fitme-jr29g` service:
```yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-12-02T11:18:55Z"
  generateName: fitme-
  labels:
    app: fitme
  name: fitme-jr29g
  namespace: default
  resourceVersion: "486673"
  selfLink: /api/v1/namespaces/default/services/fitme-8t7rl
  uid: 875045eb-14f5-11ea-823c-42010a8e0047
spec:
  clusterIP: 10.35.240.95
  ports:
  - port: 9000
    protocol: TCP
    targetPort: 9000
  selector:
    app: fitme
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
```
YAML of the `nginx-1-service` service:
```yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-12-02T11:30:10Z"
  labels:
    app: admin
  name: nginx-1-service
  namespace: default
  resourceVersion: "489972"
  selfLink: /api/v1/namespaces/default/services/admin-service
  uid: 195b462e-14f7-11ea-823c-42010a8e0047
spec:
  clusterIP: 10.35.250.90
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30628
    port: 8080
    protocol: TCP
    targetPort: 9000
  selector:
    app: admin
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 35.227.26.101
```
YAML of the `admin` deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2019-12-02T11:24:09Z"
  generation: 2
  labels:
    app: admin
  name: admin
  namespace: default
  resourceVersion: "489624"
  selfLink: /apis/apps/v1/namespaces/default/deployments/admin
  uid: 426792e6-14f6-11ea-823c-42010a8e0047
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: admin
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: admin
    spec:
      containers:
      - image: gcr.io/docker-226818/admin@sha256:602fe6b7e43d53251eebe2f29968bebbd756336c809cb1cd43787027537a5c8b
        imagePullPolicy: IfNotPresent
        name: admin-sha256
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2019-12-02T11:24:18Z"
    lastUpdateTime: "2019-12-02T11:24:18Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2019-12-02T11:24:09Z"
    lastUpdateTime: "2019-12-02T11:24:18Z"
    message: ReplicaSet "admin-8d55dfbb6" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 2
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
```
YAML of the `fitme-service` service:
```yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-12-02T13:38:21Z"
  generateName: fitme-
  labels:
    app: fitme
  name: fitme-service
  namespace: default
  resourceVersion: "525173"
  selfLink: /api/v1/namespaces/default/services/drogo-mzcgr
  uid: 01e8fc39-1509-11ea-823c-42010a8e0047
spec:
  clusterIP: 10.35.240.74
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31016
    port: 80
    protocol: TCP
    targetPort: 9000
  selector:
    app: fitme
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 35.236.110.230
```
YAML of the `fitme` deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2019-12-02T13:34:54Z"
  generation: 2
  labels:
    app: fitme
  name: fitme
  namespace: default
  resourceVersion: "525571"
  selfLink: /apis/apps/v1/namespaces/default/deployments/drogo
  uid: 865a5a8a-1508-11ea-823c-42010a8e0047
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: drogo
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: fitme
    spec:
      containers:
      - image: gcr.io/fitme-226818/drogo@sha256:ab49a4b12e7a14f9428a5720bbfd1808eb9667855cb874e973c386a4e9b59d40
        imagePullPolicy: IfNotPresent
        name: fitme-sha256
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2019-12-02T13:34:57Z"
    lastUpdateTime: "2019-12-02T13:34:57Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2019-12-02T13:34:54Z"
    lastUpdateTime: "2019-12-02T13:34:57Z"
    message: ReplicaSet "drogo-5c7f449668" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 2
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
```

I am accessing the `fitme-jr29g` container by putting in the IP address `10.35.240.95:9000`.
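As a side note on in-cluster addressing: a ClusterIP service can also be reached from other pods through its DNS name, `<service>.<namespace>.svc.cluster.local`, instead of a hard-coded Cluster IP. A rough sketch of how the admin portal's pod template could consume the backend address that way (the `BACKEND_URL` variable name is purely illustrative and not part of the manifests above):

```yaml
# Hypothetical fragment of the admin/nginx-1 pod template: the backend is
# addressed via the service's cluster DNS name rather than a hard-coded IP.
# BACKEND_URL is only an illustrative variable name, not from the original setup.
spec:
  containers:
  - name: admin-sha256
    image: gcr.io/docker-226818/admin@sha256:602fe6b7e43d53251eebe2f29968bebbd756336c809cb1cd43787027537a5c8b
    env:
    - name: BACKEND_URL
      value: "http://fitme-jr29g.default.svc.cluster.local:9000"
```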
Answer (score: 0):
Deployment objects can, and often should, have network properties to expose the applications within their pods.

Pods are the objects with network capabilities; virtual ethernet interfaces are required to receive incoming traffic.

Services, on the other hand, are purely network-oriented objects, used mainly to relay network traffic into the pods.

You can think of it as pods (grouped in deployments) acting as the backend and services acting as the load balancers. In the end, both need network capabilities.
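As a generic sketch of that relationship (all names below are placeholders, not taken from the manifests in the question): a Service selects pods by label and forwards its `targetPort` to a port the container actually listens on, which the pod template declares as `containerPort`.

```yaml
# Minimal sketch of how a Service relays traffic to pods: the Service's
# selector must match the pod labels, and its targetPort must match a port
# the container actually listens on (declared as containerPort below).
apiVersion: v1
kind: Service
metadata:
  name: example-backend          # placeholder name
spec:
  type: ClusterIP
  selector:
    app: example-backend         # must match the pod template labels
  ports:
  - port: 9000                   # port clients inside the cluster connect to
    targetPort: 9000             # port the traffic is forwarded to on the pod
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-backend
  template:
    metadata:
      labels:
        app: example-backend
    spec:
      containers:
      - name: app
        image: example/backend:latest   # placeholder image
        ports:
        - containerPort: 9000           # the port the process listens on
```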
In your case, I am not sure how you exposed the deployment through a `load balancer`, since its pods do not seem to have any open ports.

Since the target port of the services exposing your pods is 9000, you can add it to the pod template in your deployments:
```yaml
spec:
  containers:
  - image: gcr.io/fitme-xxxxxxx
    name: fitme-sha256
    ports:
    - containerPort: 9000
```
Make sure it matches the port on which your container actually receives incoming requests.
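Applied to the `fitme` deployment from the question, the container section of the pod template might then look roughly like this (a sketch reusing the image and container name from the manifest posted above):

```yaml
# Sketch: fitme pod template with the listening port declared, so it matches
# the targetPort (9000) of the fitme-jr29g and fitme-service Services.
spec:
  containers:
  - image: gcr.io/fitme-226818/drogo@sha256:ab49a4b12e7a14f9428a5720bbfd1808eb9667855cb874e973c386a4e9b59d40
    imagePullPolicy: IfNotPresent
    name: fitme-sha256
    ports:
    - containerPort: 9000
```

After updating the deployment (for example with `kubectl edit deployment fitme` or by re-applying the manifest with `kubectl apply`), the new pods should then be reachable through the `fitme-jr29g` Cluster IP on port 9000.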