I have been banging my head against this wall for a while now. There is a ton of information about Kubernetes online, but none of it really assumes a n00b like me with very little to go on.

So, can anyone share a simple example (as yaml files) of the following? All I want is

and then an example of making an API call from the back end to the front end.

I started looking into this and suddenly I hit this page — https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-achieve-this. This is super unhelpful. I don't want or need advanced network policies, nor do I have the time to wade through the several different service layers mapped on top of Kubernetes. I just want to figure out a trivial example of a network request.

Hopefully if this example exists on Stack Overflow it will serve others as well.

Any help would be appreciated. Thanks.

EDIT: It looks like the simplest example may be using an Ingress controller.

EDIT EDIT:

I'm attempting to deploy a minimal example — I'll walk through some of the steps here and point out my issues.

Here is my yaml file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/frontend_example
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/backend_example
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: LoadBalancer
  selector:
    app: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
spec:
  rules:
  - host: www.kubeplaytime.example
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
      - path: /api
        backend:
          serviceName: backend
          servicePort: 80
What I believe this is doing:

Deploying the frontend and backend apps — I pushed patientplatypus/frontend_example and patientplatypus/backend_example to dockerhub, and the images are then pulled down. One open question I have is: what if I don't want to pull the images from docker hub and would rather just load them from my localhost — is that possible? In that case I would push my code to a production server, build the docker images on the server, and then upload them to kubernetes. The benefit is that I don't have to rely on dockerhub if I want my images to be private.
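(For what it's worth, one common way to do this — assuming the image has already been built on the node itself, e.g. against minikube's Docker daemon via `eval $(minikube docker-env)` — is to set `imagePullPolicy` so that Kubernetes never contacts a registry. The image name below is hypothetical, not one of mine:)

```yaml
# Sketch: a container spec that uses a locally built image instead of dockerhub
containers:
- name: nginx
  image: my-local-image:latest   # hypothetical image built directly on the node
  imagePullPolicy: Never         # never pull from a registry; errors if the image is absent
```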
It is creating two service endpoints that route outside traffic from a web browser to each of the deployments. These services are of type LoadBalancer because they are balancing the traffic among the (in this case 3) replicasets that I have in each deployment.

Finally, I have an Ingress which is supposed to allow my services to be routed to through www.kubeplaytime.example and www.kubeplaytime.example/api. However, this is not working.
What happens when I run this?
patientplatypus:~/Documents/kubePlay:09:17:50$kubectl create -f kube-deploy.yaml
deployment.apps "frontend" created
service "frontend" created
deployment.apps "backend" created
service "backend" created
ingress.extensions "frontend" created
First, it appears to create all the parts that I need, with no errors.
patientplatypus:~/Documents/kubePlay:09:22:30$kubectl get --watch services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend LoadBalancer 10.0.18.174 <pending> 80:31649/TCP 1m
frontend LoadBalancer 10.0.100.65 <pending> 80:32635/TCP 1m
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 10d
frontend LoadBalancer 10.0.100.65 138.91.126.178 80:32635/TCP 2m
backend LoadBalancer 10.0.18.174 138.91.121.182 80:31649/TCP 2m
Second, if I watch the services, I eventually get IP addresses that I can use to navigate to these sites in my browser. Each of the above IP addresses works in routing me to the frontend and backend respectively.

HOWEVER

I run into an issue when I try to use the Ingress — it seemingly deployed, but I have no idea how to get there.
patientplatypus:~/Documents/kubePlay:09:24:44$kubectl get ingresses
NAME HOSTS ADDRESS PORTS AGE
frontend www.kubeplaytime.example 80 16m
www.kubeplaytime.example does not appear to work. It seems that what I have to do, in order to route to the Ingress extension I just created, is to use a service and deployment *on it* in order to get an IP address — but this quickly starts to look incredibly complicated.

For example, take a look at this Medium article: https://medium.com/@cashisclay/kubernetes-ingress-82aa960f658e.

It would appear that the necessary code to add just the service routing for the Ingress (i.e. what he calls the Ingress Controller) is this:
---
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: ingress-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
        name: ingress-nginx
        imagePullPolicy: Always
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        - name: https
          containerPort: 443
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-default-backend
spec:
  ports:
  - port: 80
    targetPort: http
  selector:
    app: nginx-default-backend
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nginx-default-backend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-default-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
This, it would appear, needs to be appended to my other yaml code above in order to provide a service entry point for my Ingress routing, and it does indeed appear to give an IP:
patientplatypus:~/Documents/kubePlay:09:54:12$kubectl get --watch services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend LoadBalancer 10.0.31.209 <pending> 80:32428/TCP 4m
frontend LoadBalancer 10.0.222.47 <pending> 80:32482/TCP 4m
ingress-nginx LoadBalancer 10.0.28.157 <pending> 80:30573/TCP,443:30802/TCP 4m
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 10d
nginx-default-backend ClusterIP 10.0.71.121 <none> 80/TCP 4m
frontend LoadBalancer 10.0.222.47 40.121.7.66 80:32482/TCP 5m
ingress-nginx LoadBalancer 10.0.28.157 40.121.6.179 80:30573/TCP,443:30802/TCP 6m
backend LoadBalancer 10.0.31.209 40.117.248.73 80:32428/TCP 7m
So ingress-nginx appears to be the site I want to get to. Navigating to 40.121.6.179 returns the default 404 message (default backend - 404) — it does not go to frontend as / should route. /api returns the same. Navigating to my host namespace www.kubeplaytime.example returns a 404 from the browser — no error handling.
QUESTIONS

Is an Ingress Controller strictly necessary, and if so is there a less complicated version of this?

I feel I am close — what am I doing wrong?
FULL YAML

Available here: https://gist.github.com/patientplatypus/fa07648339ee6538616cb69282a84938

Thank you for the help!
EDIT EDIT

I attempted to use HELM. On its surface it appears to be a simple interface, and so I tried spinning it up:
patientplatypus:~/Documents/kubePlay:12:13:00$helm install stable/nginx-ingress
NAME: erstwhile-beetle
LAST DEPLOYED: Sun May 6 12:13:30 2018
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
erstwhile-beetle-nginx-ingress-controller 1 1s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
erstwhile-beetle-nginx-ingress-controller LoadBalancer 10.0.216.38 <pending> 80:31494/TCP,443:32118/TCP 1s
erstwhile-beetle-nginx-ingress-default-backend ClusterIP 10.0.55.224 <none> 80/TCP 1s
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
erstwhile-beetle-nginx-ingress-controller 1 1 1 0 1s
erstwhile-beetle-nginx-ingress-default-backend 1 1 1 0 1s
==> v1beta1/PodDisruptionBudget
NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
erstwhile-beetle-nginx-ingress-controller 1 N/A 0 1s
erstwhile-beetle-nginx-ingress-default-backend 1 N/A 0 1s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
erstwhile-beetle-nginx-ingress-controller-7df9b78b64-24hwz 0/1 ContainerCreating 0 1s
erstwhile-beetle-nginx-ingress-default-backend-849b8df477-gzv8w 0/1 ContainerCreating 0 1s
NOTES:
The nginx-ingress controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w erstwhile-beetle-nginx-ingress-controller'
An example Ingress that makes use of the controller:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: example
  namespace: foo
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - backend:
          serviceName: exampleService
          servicePort: 80
        path: /
  # This section is only required if TLS is to be enabled for the Ingress
  tls:
  - hosts:
    - www.example.com
    secretName: example-tls
If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
apiVersion: v1
kind: Secret
metadata:
  name: example-tls
  namespace: foo
data:
  tls.crt: <base64 encoded cert>
  tls.key: <base64 encoded key>
type: kubernetes.io/tls
This all seems very nice — it spins everything up and gives an example of how to add an Ingress. Since I spun up helm in a blank kubectl context, I used the following yaml file to add in what I thought would be required.

The file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/frontend_example
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/backend_example
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: LoadBalancer
  selector:
    app: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /api
        backend:
          serviceName: backend
          servicePort: 80
      - path: /
        frontend:
          serviceName: frontend
          servicePort: 80
Deploying this to the cluster, however, runs into this error:
patientplatypus:~/Documents/kubePlay:11:44:20$kubectl create -f kube-deploy.yaml
deployment.apps "frontend" created
service "frontend" created
deployment.apps "backend" created
service "backend" created
error: error validating "kube-deploy.yaml": error validating data: [ValidationError(Ingress.spec.rules[0].http.paths[1]): unknown field "frontend" in io.k8s.api.extensions.v1beta1.HTTPIngressPath, ValidationError(Ingress.spec.rules[0].http.paths[1]): missing required field "backend" in io.k8s.api.extensions.v1beta1.HTTPIngressPath]; if you choose to ignore these errors, turn validation off with --validate=false
So, the question then becomes: well, how do I debug this? If you spit out the code that helm generates, it's basically unreadable — there's no way to go in there and figure out what's going on.

Check it out: https://gist.github.com/patientplatypus/0e281bf61307f02e16e0091397a1d863 — over 1000 lines!

If anyone has a better way to debug a helm deploy, please add it to the list of open questions.
EDIT EDIT EDIT

In an extreme of simplification, I am attempting to make a call from one pod to another, using only the namespace.
So here is my React code where I make the http request:

axios.get('http://backend/test')
  .then(response => {
    console.log('return from backend and response: ', response);
  })
  .catch(error => {
    console.log('return from backend and error: ', error);
  })

I also attempted to use http://backend.exampledeploy.svc.cluster.local/test, without luck.

Here is my node code handling the GET:

router.get('/test', function(req, res, next) {
  res.json({"test": "test"})
});
Here is the yaml file I uploaded to the cluster with kubectl:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend
  namespace: exampledeploy
  labels:
    app: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/frontend_example
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: exampledeploy
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  namespace: exampledeploy
  labels:
    app: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/backend_example
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: exampledeploy
spec:
  type: LoadBalancer
  selector:
    app: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
The upload to the cluster appears to work, as I can see it in my terminal:
patientplatypus:~/Documents/kubePlay:14:33:20$kubectl get all --namespace=exampledeploy
NAME READY STATUS RESTARTS AGE
pod/backend-584c5c59bc-5wkb4 1/1 Running 0 15m
pod/backend-584c5c59bc-jsr4m 1/1 Running 0 15m
pod/backend-584c5c59bc-txgw5 1/1 Running 0 15m
pod/frontend-647c99cdcf-2mmvn 1/1 Running 0 15m
pod/frontend-647c99cdcf-79sq5 1/1 Running 0 15m
pod/frontend-647c99cdcf-r5bvg 1/1 Running 0 15m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/backend LoadBalancer 10.0.112.160 168.62.175.155 80:31498/TCP 15m
service/frontend LoadBalancer 10.0.246.212 168.62.37.100 80:31139/TCP 15m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.extensions/backend 3 3 3 3 15m
deployment.extensions/frontend 3 3 3 3 15m
NAME DESIRED CURRENT READY AGE
replicaset.extensions/backend-584c5c59bc 3 3 3 15m
replicaset.extensions/frontend-647c99cdcf 3 3 3 15m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/backend 3 3 3 3 15m
deployment.apps/frontend 3 3 3 3 15m
NAME DESIRED CURRENT READY AGE
replicaset.apps/backend-584c5c59bc 3 3 3 15m
replicaset.apps/frontend-647c99cdcf 3 3 3 15m
However, when I attempt to make the request, I get the following error:
return from backend and error:
Error: Network Error
Stack trace:
createError@http://168.62.37.100/static/js/bundle.js:1555:15
handleError@http://168.62.37.100/static/js/bundle.js:1091:14
App.js:14
Since the axios call is being made from the browser, I am wondering if it is simply impossible to use this method to call the backend, even though the backend and the frontend are in different pods. I am a little lost, as I thought this was the simplest possible way to network pods together.
EDIT X5

I have determined that it is possible to curl the backend from the command line by exec'ing into the pod like this:
patientplatypus:~/Documents/kubePlay:15:25:25$kubectl exec -ti frontend-647c99cdcf-5mfz4 --namespace=exampledeploy -- curl -v http://backend/test
* Hostname was NOT found in DNS cache
* Trying 10.0.249.147...
* Connected to backend (10.0.249.147) port 80 (#0)
> GET /test HTTP/1.1
> User-Agent: curl/7.38.0
> Host: backend
> Accept: */*
>
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Content-Type: application/json; charset=utf-8
< Content-Length: 15
< ETag: W/"f-SzkCEKs7NV6rxiz4/VbpzPnLKEM"
< Date: Sun, 06 May 2018 20:25:49 GMT
< Connection: keep-alive
<
* Connection #0 to host backend left intact
{"test":"test"}
What this means, without a doubt, is that because the frontend code is being executed in the browser, it needs Ingress to gain entry into the pod — http requests from the frontend are what break simple pod networking. I wasn't sure of this, but it means Ingress is necessary.
Answer 0 (score: 4)
First of all, let's clarify some apparent misconceptions. You mentioned your front-end being a React application, which will presumably run in the users' browsers. For this to work, your actual problem is not your back-end and front-end pods communicating with each other, but the browser needs to be able to connect to both these pods (to the front-end pod in order to load the React application, and to the back-end pod for the React application to make API calls).

To visualize:
                                      +---------+
                                  +---| Browser |---+
                                  |   +---------+   |
                                  v                 v
+-----------+     +----------+   +-----------+     +----------+
| Front-end |---->| Back-end |   | Front-end |     | Back-end |
+-----------+     +----------+   +-----------+     +----------+
  (what you asked for)             (what you need)
As already stated, the easiest solution for this would be to use an Ingress controller. I won't go into detail on how to set up an Ingress controller here; in some cloud environments (like GKE) you will be able to use a cloud-provided Ingress controller. Otherwise, you can set up the NGINX Ingress controller. Have a look at the NGINX Ingress controller deployment guide for more information.

Start by defining Service resources for both your front-end and back-end application (these would also allow your pods to communicate with each other). A service definition might look like this:
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
Make sure that your pods have labels that can be selected by the Service resource (in this example, I'm using app=backend and app=frontend as labels).

If you want to establish pod-to-pod communication, you're done now. In each pod, you can now use backend.<namespace>.svc.cluster.local (or backend as a shorthand) and frontend as host names to connect to that pod.
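(As a sketch of how a pod might consume that in-cluster DNS name without hard-coding it — the environment variable name here is my own invention, not something the application above already reads:)

```yaml
# Hypothetical fragment of the front-end Deployment's pod spec:
# the backend's service DNS name is injected as configuration.
containers:
- name: nginx
  image: patientplatypus/frontend_example
  env:
  - name: BACKEND_URL                                   # assumed variable; the app must read it
    value: "http://backend.default.svc.cluster.local"   # service name + namespace + cluster suffix
```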
Next, you can define the Ingress resources; since both services need connectivity from outside the cluster (the users' browser), you will need Ingress definitions for both services.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
spec:
  rules:
  - host: www.your-application.example
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: backend
spec:
  rules:
  - host: api.your-application.example
    http:
      paths:
      - path: /
        backend:
          serviceName: backend
          servicePort: 80
Alternatively, you can also aggregate frontend and backend with a single Ingress resource (there's no "right" answer here, just a matter of preference):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
spec:
  rules:
  - host: www.your-application.example
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
      - path: /api
        backend:
          serviceName: backend
          servicePort: 80
After that, make sure that both www.your-application.example and api.your-application.example point to your Ingress controller's external IP address, and you should be done.
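(If you can't set up real DNS records while experimenting, a common workaround is a local hosts-file entry — the IP below is the Ingress controller's external IP from the question's output and would differ in any other cluster:)

```
# /etc/hosts — maps the example hostnames to the Ingress controller's external IP
40.121.6.179  www.your-application.example
40.121.6.179  api.your-application.example
```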
Answer 1 (score: 3)
As it turns out, I was over-complicating things. Here is the Kubernetes file that works to do what I want. You can do this using two deployments (front end and back end) and one service entrypoint. As far as I can tell, a service can load-balance to many (not just 2) different deployments, meaning for practical development this should be a good start to microservice development. One of the benefits of the Ingress method is allowing the use of path names rather than port numbers, but given the difficulty it doesn't seem practical in development.

Here is the yaml file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: exampleapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: exampleapp
  template:
    metadata:
      labels:
        app: exampleapp
    spec:
      containers:
      - name: nginx
        image: patientplatypus/kubeplayfrontend
        ports:
        - containerPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  labels:
    app: exampleapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: exampleapp
  template:
    metadata:
      labels:
        app: exampleapp
    spec:
      containers:
      - name: nginx
        image: patientplatypus/kubeplaybackend
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: entrypt
spec:
  type: LoadBalancer
  ports:
  - name: backend
    port: 8080
    targetPort: 5000
  - name: frontend
    port: 81
    targetPort: 3000
  selector:
    app: exampleapp
Here are the bash commands I use to get it spun up (you may have to add a login command — docker login — to push to dockerhub):
#!/bin/bash
# stop all containers
echo stopping all containers
docker stop $(docker ps -aq)
# remove all containers
echo removing all containers
docker rm $(docker ps -aq)
# remove all images
echo removing all images
docker rmi $(docker images -q)
echo building backend
cd ./backend
docker build -t patientplatypus/kubeplaybackend .
echo push backend to dockerhub
docker push patientplatypus/kubeplaybackend:latest
echo building frontend
cd ../frontend
docker build -t patientplatypus/kubeplayfrontend .
echo push backend to dockerhub
docker push patientplatypus/kubeplayfrontend:latest
echo now working on kubectl
cd ..
echo deleting previous variables
kubectl delete pods,deployments,services entrypt backend frontend
echo creating deployment
kubectl create -f kube-deploy.yaml
echo watching services spin up
kubectl get services --watch
The actual code is just a front-end react application making an axios http call to a back-end node route on componentDidMount of the starting app page.

You can also see a working example here: https://github.com/patientplatypus/KubernetesMultiPodCommunication

Thanks again everyone for your help.
Answer 2 (score: 0)
To use an ingress controller you need to have a valid domain (a DNS server configured to point to your ingress controller's IP). This is not due to any kubernetes "magic" but due to how vhosts work (here is an example for nginx — very often used as an ingress server, but any other ingress implementation works the same way under the hood).

If you can't configure your domain, the easiest way for development purposes would be to create a kubernetes service. There is a nice shortcut for doing it using kubectl expose:

kubectl expose pod frontend-pod --port=444 --name=frontend
kubectl expose pod backend-pod --port=888 --name=backend
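(For reference, `kubectl expose pod frontend-pod --port=444 --name=frontend` is roughly equivalent to creating a Service like the following by hand. The selector label below is an assumption — `expose` actually copies whatever labels the exposed pod carries:)

```yaml
# Sketch of the Service that kubectl expose would generate for the frontend pod
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    run: frontend-pod      # assumed label; kubectl expose reuses the pod's own labels
  ports:
  - port: 444              # service port from --port
    targetPort: 444        # defaults to the same port when --target-port is omitted
```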