We want to test Kubernetes load balancing. So we created a 2-node cluster that runs 6 replicas of our container. The container runs an Apache2 server with PHP; if we browse to hostname.php, it prints the pod name.
Cluster details:
172.16.2.92 - master and minion
172.16.2.91 - minion
ReplicationController and Service details:
frontend-controller.json:
{
  "kind": "ReplicationController",
  "apiVersion": "v1beta3",
  "metadata": {
    "name": "frontend",
    "labels": {
      "name": "frontend"
    }
  },
  "spec": {
    "replicas": 6,
    "selector": {
      "name": "frontend"
    },
    "template": {
      "metadata": {
        "labels": {
          "name": "frontend"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "php-hostname",
            "image": "naresht/hostname",
            "ports": [
              {
                "containerPort": 80,
                "protocol": "TCP"
              }
            ]
          }
        ]
      }
    }
  }
}
frontend-service.json:
{
  "kind": "Service",
  "apiVersion": "v1beta3",
  "metadata": {
    "name": "frontend",
    "labels": {
      "name": "frontend"
    }
  },
  "spec": {
    "createExternalLoadBalancer": true,
    "ports": [
      {
        "port": 3000,
        "targetPort": 80,
        "protocol": "TCP"
      }
    ],
    "publicIPs": ["172.16.2.92"],
    "selector": {
      "name": "frontend"
    }
  }
}
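As a quick sanity check (not part of the original question): the Service's selector must match the labels on the RC's pod template, or the Service will have no endpoints at all. A minimal sketch, inlining just the relevant fragments of the two manifests above:

```python
# Labels taken from the manifests above.
rc_template_labels = {"name": "frontend"}   # frontend-controller.json, spec.template.metadata.labels
service_selector = {"name": "frontend"}     # frontend-service.json, spec.selector

def selector_matches(selector, labels):
    """A selector matches iff every selector key/value pair appears in the labels."""
    return all(labels.get(k) == v for k, v in selector.items())

print(selector_matches(service_selector, rc_template_labels))  # True for these manifests
```

Here they do match, so endpoint selection itself is not the problem.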
Pod details: frontend-01bb8, frontend-svxfl and frontend-yki5s are running on node 172.16.2.91; frontend-65ykz, frontend-c1x0d and frontend-y925t are running on node 172.16.2.92.
If we browse to 172.16.2.92:3000/hostname.php, it prints the pod name.
Problem:
Running `watch -n1 curl 172.16.2.92:3000/hostname.php` on node 172.16.2.92 only ever returns that node's pods (frontend-65ykz, frontend-c1x0d and frontend-y925t); it never shows the pods on node 172.16.2.91. Running the same command on node 172.16.2.91 only returns that node's pods; it never shows the pods on node 172.16.2.92. Running it from outside the cluster only shows the 172.16.2.92 pods. We expect to see all six pods no matter where we run the command, not just the local node's pods.
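One way to quantify the imbalance, instead of eyeballing `watch` output, is to hit the service repeatedly and tally which pod answered. A sketch with the fetch function injected so it can be stubbed; against the real cluster it would be an HTTP GET to 172.16.2.92:3000/hostname.php that returns the body:

```python
import itertools
from collections import Counter

def tally_pods(fetch, n=60):
    """Call fetch() n times; fetch() returns the pod name that served the request."""
    counts = Counter()
    for _ in range(n):
        counts[fetch()] += 1
    return counts

# Stubbed example reproducing the symptom: the proxy only ever picks local pods.
local_only = itertools.cycle(["frontend-65ykz", "frontend-c1x0d", "frontend-y925t"])
print(tally_pods(lambda: next(local_only), 6))
```

With load balancing working correctly, all six pod names should appear in the tally, each roughly n/6 times.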
See the details below for more information, and please help if you spot anything wrong.
#kubectl get nodes
NAME LABELS STATUS
172.16.2.91 kubernetes.io/hostname=172.16.2.91 Ready
172.16.2.92 kubernetes.io/hostname=172.16.2.92 Ready
#kubectl get pods
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
frontend-01bb8 172.17.0.84 172.16.2.91/172.16.2.91 name=frontend Running About a minute
php-hostname naresht/hostname Running About a minute
frontend-65ykz 10.1.64.79 172.16.2.92/172.16.2.92 name=frontend Running About a minute
php-hostname naresht/hostname Running About a minute
frontend-c1x0d 10.1.64.77 172.16.2.92/172.16.2.92 name=frontend Running About a minute
php-hostname naresht/hostname Running About a minute
frontend-svxfl 172.17.0.82 172.16.2.91/172.16.2.91 name=frontend Running About a minute
php-hostname naresht/hostname Running About a minute
frontend-y925t 10.1.64.78 172.16.2.92/172.16.2.92 name=frontend Running About a minute
php-hostname naresht/hostname Running About a minute
frontend-yki5s 172.17.0.83 172.16.2.91/172.16.2.91 name=frontend Running About a minute
php-hostname naresht/hostname Running About a minute
kube-dns-sbgma 10.1.64.11 172.16.2.92/172.16.2.92 k8s-app=kube-dns,kubernetes.io/cluster-service=true,name=kube-dns Running 45 hours
kube2sky gcr.io/google_containers/kube2sky:1.1 Running 45 hours
etcd quay.io/coreos/etcd:v2.0.3 Running 45 hours
skydns gcr.io/google_containers/skydns:2015-03-11-001 Running 45 hours
#kubectl get services
NAME LABELS SELECTOR IP(S) PORT(S)
frontend name=frontend name=frontend 192.168.3.184 3000/TCP
kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,name=kube-dns k8s-app=kube-dns 192.168.3.10 53/UDP
kubernetes component=apiserver,provider=kubernetes <none> 192.168.3.2 443/TCP
kubernetes-ro component=apiserver,provider=kubernetes <none> 192.168.3.1 80/TCP
#iptables -t nat -L
Chain KUBE-PORTALS-CONTAINER (1 references)
target prot opt source destination
REDIRECT tcp -- anywhere 192.168.3.184 /* default/frontend: */ tcp dpt:3000 redir ports 50734
REDIRECT tcp -- anywhere kube02 /* default/frontend: */ tcp dpt:3000 redir ports 50734
REDIRECT udp -- anywhere 192.168.3.10 /* default/kube-dns: */ udp dpt:domain redir ports 52415
REDIRECT tcp -- anywhere 192.168.3.2 /* default/kubernetes: */ tcp dpt:https redir ports 33373
REDIRECT tcp -- anywhere 192.168.3.1 /* default/kubernetes-ro: */ tcp dpt:http redir ports 60311
Chain KUBE-PORTALS-HOST (1 references)
target prot opt source destination
DNAT tcp -- anywhere 192.168.3.184 /* default/frontend: */ tcp dpt:3000 to:172.16.2.92:50734
DNAT tcp -- anywhere kube02 /* default/frontend: */ tcp dpt:3000 to:172.16.2.92:50734
DNAT udp -- anywhere 192.168.3.10 /* default/kube-dns: */ udp dpt:domain to:172.16.2.92:52415
DNAT tcp -- anywhere 192.168.3.2 /* default/kubernetes: */ tcp dpt:https to:172.16.2.92:33373
DNAT tcp -- anywhere 192.168.3.1 /* default/kubernetes-ro: */ tcp dpt:http to:172.16.2.92:60311
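For context on the iptables dump above (an editorial aside, not from the original question): in this Kubernetes release kube-proxy runs in userspace mode, so the REDIRECT/DNAT rules only steer service-IP traffic to a local kube-proxy port, and kube-proxy then picks an endpoint roughly round-robin. A much-simplified model of that selection:

```python
import itertools

class RoundRobinProxy:
    """Very simplified sketch of userspace kube-proxy endpoint selection."""
    def __init__(self, endpoints):
        self._cycle = itertools.cycle(endpoints)

    def next_endpoint(self):
        return next(self._cycle)

# If endpoints on the other node are unreachable (e.g. a broken overlay network),
# only the reachable ones effectively serve traffic, which matches the symptom:
proxy = RoundRobinProxy(["10.1.64.79:80", "10.1.64.77:80", "10.1.64.78:80"])
for _ in range(4):
    print(proxy.next_endpoint())
```

The rules themselves look fine; the question is whether kube-proxy can actually reach the pods on the other node.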
Thanks
Answer 0 (score: 1)
This happens because flannel is not working properly. Run /root/kube/reconfDocker.sh on every node; it restarts docker and flannel. Then check with ifconfig that the docker0 and flannel0 bridge IPs are on the same network. After that, load balancing works. It worked for me.
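The "same network" check in this answer can be done mechanically: docker0's subnet should lie inside the network flannel0 covers. A sketch using Python's ipaddress module (the CIDRs below are hypothetical examples, not taken from the question):

```python
import ipaddress

def same_flannel_network(docker0_cidr, flannel0_cidr):
    """True if docker0's subnet lies inside (or equals) flannel0's network."""
    docker_net = ipaddress.ip_interface(docker0_cidr).network
    flannel_net = ipaddress.ip_interface(flannel0_cidr).network
    return docker_net.subnet_of(flannel_net)

# Healthy node: docker0 got its per-node range from flannel.
print(same_flannel_network("10.1.64.1/24", "10.1.0.0/16"))   # True
# Broken node: docker0 is still on Docker's default bridge network.
print(same_flannel_network("172.17.42.1/16", "10.1.0.0/16")) # False
```

Note this is exactly the pattern visible in the `kubectl get pods` output above: pods on 172.16.2.92 have 10.1.64.x addresses, while pods on 172.16.2.91 have 172.17.0.x addresses, i.e. docker on 172.16.2.91 never picked up the flannel subnet.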
Answer 1 (score: 0)
It looks to me like a network configuration problem. The pods on host 172.16.2.91 have IP addresses like 172.17.0.xx — check whether they can be reached from the other host, 172.16.2.92.
If the ping fails, check your network against the Kubernetes requirements: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/networking.md
Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies):
• all containers can communicate with all other containers without NAT
• all nodes can communicate with all containers (and vice versa) without NAT
• the IP that a container sees itself as is the same IP that others see it as
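These requirements boil down to a full reachability matrix: every node must be able to reach every pod IP. A sketch with the probe injected so it can be stubbed; on a real node it could shell out to ping or attempt a TCP connect:

```python
def reachability_matrix(nodes, pod_ips, probe):
    """probe(node, ip) -> bool. Returns the (node, ip) pairs that FAIL."""
    return [(n, ip) for n in nodes for ip in pod_ips if not probe(n, ip)]

# Stubbed probe matching the question's symptom: 172.17.0.x pods sit on
# Docker's default bridge and are only reachable from their own node.
def probe(node, ip):
    if ip.startswith("172.17.") and node != "172.16.2.91":
        return False
    return True

nodes = ["172.16.2.91", "172.16.2.92"]
pods = ["172.17.0.84", "10.1.64.79"]
print(reachability_matrix(nodes, pods, probe))
# → [('172.16.2.92', '172.17.0.84')]
```

An empty result means the no-NAT requirements above are satisfied; any failures point at exactly the pods stuck outside the overlay network.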