I am trying to set up a K8s cluster with Rancher on two bare-metal servers running CentOS 7. I created the cluster through the Rancher UI and then added the two nodes:

- server 1 with the etcd, controlplane and worker roles
- server 2 with the controlplane and worker roles

Everything works fine. Then I tried to deploy the rancher/hello-world image following the Rancher tutorial and configured an ingress on port 80.

If the pod runs on server 1, it is easily reachable through the server 1 xip.io address, since server 1's IP is the cluster entry point. When it runs on server 2, nginx returns a 504 gateway timeout error.
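A quick way to tell whether this is an overlay-network problem rather than an ingress problem is to bypass the ingress and hit the pod IP directly (a minimal sketch, assuming kubectl access from server 1; the 10.42.x.x pod IP below is hypothetical):

```
# Find out which node the pod landed on and its pod-network IP
kubectl get pods -o wide -l workload.user.cattle.io/workloadselector=deployment-default-hello

# From server 1, try the pod directly (rancher/hello-world serves HTTP on 80).
# A timeout here, while the pod runs on server 2, points at the pod network,
# not at the ingress controller.
curl -m 5 http://10.42.4.5/
```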
I opened all the required ports and then disabled firewalld entirely.
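For completeness, two checks that the host firewall really is out of the picture (a sketch; the UDP 8472 port assumes flannel's VXLAN backend, the RKE default, which may not apply if the cluster ended up with host-gw-style routes):

```
# On both nodes: firewalld should report "inactive"
systemctl is-active firewalld

# With the VXLAN backend, node-to-node pod traffic is tunnelled over UDP 8472;
# the kernel's vxlan socket should be visible here
ss -lun | grep 8472
```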
I noticed that two Kubernetes services logged some errors:
Flannel:
```
E0429 14:20:13.625489 1 route_network.go:114] Error adding route to 10.42.0.0/24 via 192.168.169.46 dev index 2: network is unreachable
I0429 14:20:13.626679 1 iptables.go:115] Some iptables rules are missing; deleting and recreating rules
I0429 14:20:13.626689 1 iptables.go:137] Deleting iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
I0429 14:20:13.626934 1 iptables.go:115] Some iptables rules are missing; deleting and recreating rules
I0429 14:20:13.626943 1 iptables.go:137] Deleting iptables rule: -s 10.42.0.0/16 -j ACCEPT
I0429 14:20:13.627279 1 iptables.go:137] Deleting iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
I0429 14:20:13.627568 1 iptables.go:137] Deleting iptables rule: -d 10.42.0.0/16 -j ACCEPT
I0429 14:20:13.627849 1 iptables.go:137] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.4.0/24 -j RETURN
I0429 14:20:13.628111 1 iptables.go:125] Adding iptables rule: -s 10.42.0.0/16 -j ACCEPT
I0429 14:20:13.628551 1 iptables.go:137] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE
I0429 14:20:13.629139 1 iptables.go:125] Adding iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
I0429 14:20:13.629356 1 iptables.go:125] Adding iptables rule: -d 10.42.0.0/16 -j ACCEPT
I0429 14:20:13.630313 1 iptables.go:125] Adding iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
I0429 14:20:13.631531 1 iptables.go:125] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.4.0/24 -j RETURN
I0429 14:20:13.632717 1 iptables.go:125] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE
```
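The `network is unreachable` line is the telling one: flannel is trying to install a direct, host-gw-style route to the other node's pod subnet via that node's IP, which only works when both nodes share an L2 segment, and the two addresses that show up later in the ingress status (192.168.169.46 and 192.168.186.211) are in different /24 networks. A minimal check on the node that logged the error (a sketch, assuming 192.168.169.46 is the other node's IP, as in the log):

```
# Ask the kernel how it would reach the other node; "unreachable" here
# matches the flannel error above
ip route get 192.168.169.46

# List whatever pod-network routes flannel did manage to install
ip route show | grep 10.42
```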
The cattle agent threw:
```
Timeout connecting to proxy" url="wss://ljanalyticsdev01.lojackhq.com.ar:16443/v3/connect"
```
But that was fixed once the node took the controlplane role.
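To rule out plain connectivity problems between that node and the Rancher server, its health endpoint can be probed directly (a sketch; /ping with a pong response is the standard Rancher 2.x health check):

```
# From the node that logged the timeout: can it reach the Rancher server URL?
curl -k -m 5 https://ljanalyticsdev01.lojackhq.com.ar:16443/ping
# Expected body: pong
```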
Hello World deployment YAML:
```
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    field.cattle.io/creatorId: user-qlsc5
    field.cattle.io/publicEndpoints: '[{"addresses":["192.168.169.46"],"port":80,"protocol":"HTTP","serviceName":"default:ingress-d1e1a394f61c108633c4bd37aedde757","ingressName":"default:hello","hostname":"hello.default.192.168.169.46.xip.io","allNodes":true}]'
  creationTimestamp: "2019-04-29T03:55:16Z"
  generation: 6
  labels:
    cattle.io/creator: norman
    workload.user.cattle.io/workloadselector: deployment-default-hello
  name: hello
  namespace: default
  resourceVersion: "303493"
  selfLink: /apis/apps/v1beta2/namespaces/default/deployments/hello
  uid: 992bf62e-6a32-11e9-92ae-005056998e1d
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      workload.user.cattle.io/workloadselector: deployment-default-hello
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      annotations:
        cattle.io/timestamp: "2019-04-29T03:54:58Z"
      creationTimestamp: null
      labels:
        workload.user.cattle.io/workloadselector: deployment-default-hello
    spec:
      containers:
      - image: rancher/hello-world
        imagePullPolicy: Always
        name: hello
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities: {}
          privileged: false
          procMount: Default
          readOnlyRootFilesystem: false
          runAsNonRoot: false
        stdin: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        tty: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2019-04-29T03:55:16Z"
    lastUpdateTime: "2019-04-29T03:55:36Z"
    message: ReplicaSet "hello-6cc7bc6644" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2019-04-29T13:22:35Z"
    lastUpdateTime: "2019-04-29T13:22:35Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 6
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
```
Load balancer and ingress YAML:
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    field.cattle.io/creatorId: user-qlsc5
    field.cattle.io/ingressState: '{"aGVsbG8vZGVmYXVsdC94aXAuaW8vLzgw":"deployment:default:hello"}'
    field.cattle.io/publicEndpoints: '[{"addresses":["192.168.169.46"],"port":80,"protocol":"HTTP","serviceName":"default:ingress-d1e1a394f61c108633c4bd37aedde757","ingressName":"default:hello","hostname":"hello.default.192.168.169.46.xip.io","allNodes":true}]'
  creationTimestamp: "2019-04-27T03:51:08Z"
  generation: 2
  labels:
    cattle.io/creator: norman
  name: hello
  namespace: default
  resourceVersion: "303476"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/hello
  uid: b082e994-689f-11e9-92ae-005056998e1d
spec:
  rules:
  - host: hello.default.192.168.169.46.xip.io
    http:
      paths:
      - backend:
          serviceName: ingress-d1e1a394f61c108633c4bd37aedde757
          servicePort: 80
status:
  loadBalancer:
    ingress:
    - ip: 192.168.169.46
    - ip: 192.168.186.211
```
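Note that the two ingress addresses sit in different /24 subnets, consistent with the flannel routing error above. To see where the ingress controller pods themselves run (a sketch; RKE normally deploys nginx-ingress as a DaemonSet in the ingress-nginx namespace, so one pod per node is expected):

```
kubectl -n ingress-nginx get pods -o wide
```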
Answer (score 1):
Is your ingress controller running on the other node as well? I would probably restart your Docker service on both nodes and see if that flushes any stale routes.
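In shell terms, that suggestion amounts to roughly the following on each node (a minimal sketch, assuming a systemd-managed Docker install, as RKE uses):

```
# Restarting Docker also restarts the flannel container, which rebuilds
# its routes and iptables rules from scratch
sudo systemctl restart docker

# Afterwards, check that the 10.42.x.x pod-network routes look sane
ip route show | grep 10.42
```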