I'm playing around with Kubernetes and have set up my environment with 4 deployments:

For the hello, auth, and nodehello deployments I set up one internal service each. For the frontend deployment I set up a LoadBalancer service that is exposed to the outside world. It uses the ConfigMap nginx-frontend-conf to redirect to the appropriate pods and has the following contents:
upstream hello {
    server hello.default.svc.cluster.local;
}

upstream auth {
    server auth.default.svc.cluster.local;
}

upstream nodehello {
    server nodehello.default.svc.cluster.local;
}

server {
    listen 443;
    ssl on;
    ssl_certificate /etc/tls/cert.pem;
    ssl_certificate_key /etc/tls/key.pem;

    location / {
        proxy_pass http://hello;
    }

    location /login {
        proxy_pass http://auth;
    }

    location /nodehello {
        proxy_pass http://nodehello;
    }
}
When I call the frontend endpoint with curl -k https://<frontend-external-ip>, I get routed to an available hello pod, which is the expected behavior.

When calling https://<frontend-external-ip>/nodehello, however, I don't get routed to a nodehello pod but to a hello pod again.

I suspect the upstream nodehello configuration is the failing part. I'm not sure how service discovery works here, i.e. how the DNS name nodehello.default.svc.cluster.local gets exposed. I'd appreciate an explanation of how it works and of what I'm doing wrong.
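As a sanity check, the DNS name can be looked up from inside the frontend pod; the pod name below is a placeholder, and I'm assuming getent is available in the nginx image:

# Find the frontend pod, then resolve the service name from inside it.
kubectl get pods -l app=frontend
kubectl exec <frontend-pod-name> -- getent hosts nodehello.default.svc.cluster.local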
The YAML files used:
deployments/hello.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
        track: stable
    spec:
      containers:
        - name: hello
          image: "udacity/example-hello:1.0.0"
          ports:
            - name: http
              containerPort: 80
            - name: health
              containerPort: 81
          resources:
            limits:
              cpu: 0.2
              memory: "10Mi"
          livenessProbe:
            httpGet:
              path: /healthz
              port: 81
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 15
            timeoutSeconds: 5
          readinessProbe:
            httpGet:
              path: /readiness
              port: 81
              scheme: HTTP
            initialDelaySeconds: 5
            timeoutSeconds: 1
deployments/auth.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: auth
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: auth
        track: stable
    spec:
      containers:
        - name: auth
          image: "udacity/example-auth:1.0.0"
          ports:
            - name: http
              containerPort: 80
            - name: health
              containerPort: 81
          resources:
            limits:
              cpu: 0.2
              memory: "10Mi"
          livenessProbe:
            httpGet:
              path: /healthz
              port: 81
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 15
            timeoutSeconds: 5
          readinessProbe:
            httpGet:
              path: /readiness
              port: 81
              scheme: HTTP
            initialDelaySeconds: 5
            timeoutSeconds: 1
deployments/frontend.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
        track: stable
    spec:
      containers:
        - name: nginx
          image: "nginx:1.9.14"
          lifecycle:
            preStop:
              exec:
                command: ["/usr/sbin/nginx","-s","quit"]
          volumeMounts:
            - name: "nginx-frontend-conf"
              mountPath: "/etc/nginx/conf.d"
            - name: "tls-certs"
              mountPath: "/etc/tls"
      volumes:
        - name: "tls-certs"
          secret:
            secretName: "tls-certs"
        - name: "nginx-frontend-conf"
          configMap:
            name: "nginx-frontend-conf"
            items:
              - key: "frontend.conf"
                path: "frontend.conf"
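(For reference, a ConfigMap like the one mounted above can be created from the nginx config file like this; the file name frontend.conf matches the key used in the volume definition, though the exact command I originally ran isn't shown here:)

kubectl create configmap nginx-frontend-conf --from-file=frontend.conf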
deployments/nodehello.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodehello
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nodehello
        track: stable
    spec:
      containers:
        - name: nodehello
          image: "thezebra/nodehello:0.0.2"
          ports:
            - name: http
              containerPort: 80
          resources:
            limits:
              cpu: 0.2
              memory: "10Mi"
services/hello.yaml
kind: Service
apiVersion: v1
metadata:
  name: "hello"
spec:
  selector:
    app: "hello"
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 80
services/auth.yaml
kind: Service
apiVersion: v1
metadata:
  name: "auth"
spec:
  selector:
    app: "auth"
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 80
services/frontend.yaml
kind: Service
apiVersion: v1
metadata:
  name: "frontend"
spec:
  selector:
    app: "frontend"
  ports:
    - protocol: "TCP"
      port: 443
      targetPort: 443
  type: LoadBalancer
services/nodehello.yaml
kind: Service
apiVersion: v1
metadata:
  name: "nodehello"
spec:
  selector:
    app: "nodehello"
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 80
Answer (score: 0):
This works perfectly :-)
$ curl -s http://frontend/
{"message":"Hello"}
$ curl -s http://frontend/login
authorization failed
$ curl -s http://frontend/nodehello
Hello World!
I suspect that you updated nginx-frontend-conf when you added /nodehello but did not restart nginx afterwards. Pods don't pick up changed ConfigMaps automatically. Try:
kubectl delete pod -l app=frontend
Until versioned ConfigMaps happen, there isn't a better solution.
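To verify that the replacement pod picked up the current ConfigMap, you can check the mounted config and re-test the route; the pod name is a placeholder, and the mount path comes from frontend.yaml above:

# Look up the new pod name, inspect the mounted nginx config, then re-test.
kubectl get pods -l app=frontend
kubectl exec <new-frontend-pod> -- cat /etc/nginx/conf.d/frontend.conf
curl -sk https://<frontend-external-ip>/nodehello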