I used kubeadm to bring up a cluster on AWS. I can successfully create a load balancer with kubectl, but the load balancer does not register any EC2 instances. As a result, the service cannot be reached publicly.
From what I observed, when the ELB is created it cannot find any healthy instances under any of its subnets. I am fairly sure all of the instances are tagged correctly.
Update: reading the logs from kube-controller-manager, it reports that my nodes have no ProviderID set. According to a GitHub comment, the ELB ignores nodes whose instance ID cannot be determined from the cloud provider. Could this be the cause? How do I set the providerID?
apiVersion: v1
kind: Service
metadata:
  name: load-balancer
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "elb"
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: replica
  type: LoadBalancer
apiVersion: apps/v1
kind: Deployment
metadata:
  name: replica-deployment
  labels:
    app: replica
spec:
  replicas: 1
  selector:
    matchLabels:
      app: replica
  template:
    metadata:
      labels:
        app: replica
    spec:
      containers:
      - name: web
        image: web
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        - containerPort: 443
        command: ["/bin/bash"]
        args: ["-c", "script_to_start_server.sh"]
Part of the node's status:
addresses:
- address: 172.31.35.209
  type: InternalIP
- address: k8s
  type: Hostname
allocatable:
  cpu: "4"
  ephemeral-storage: "119850776788"
  hugepages-1Gi: "0"
  hugepages-2Mi: "0"
  memory: 16328856Ki
  pods: "110"
capacity:
  cpu: "4"
  ephemeral-storage: 130046416Ki
  hugepages-1Gi: "0"
  hugepages-2Mi: "0"
  memory: 16431256Ki
  pods: "110"
conditions:
- lastHeartbeatTime: 2018-07-12T04:01:54Z
  lastTransitionTime: 2018-07-11T15:45:06Z
  message: kubelet has sufficient disk space available
  reason: KubeletHasSufficientDisk
  status: "False"
  type: OutOfDisk
- lastHeartbeatTime: 2018-07-12T04:01:54Z
  lastTransitionTime: 2018-07-11T15:45:06Z
  message: kubelet has sufficient memory available
  reason: KubeletHasSufficientMemory
  status: "False"
  type: MemoryPressure
- lastHeartbeatTime: 2018-07-12T04:01:54Z
  lastTransitionTime: 2018-07-11T15:45:06Z
  message: kubelet has no disk pressure
  reason: KubeletHasNoDiskPressure
  status: "False"
  type: DiskPressure
- lastHeartbeatTime: 2018-07-12T04:01:54Z
  lastTransitionTime: 2018-07-11T15:45:06Z
  message: kubelet has sufficient PID available
  reason: KubeletHasSufficientPID
  status: "False"
  type: PIDPressure
- lastHeartbeatTime: 2018-07-12T04:01:54Z
  lastTransitionTime: 2018-07-11T15:45:06Z
  message: kubelet is posting ready status. AppArmor enabled
  reason: KubeletReady
  status: "True"
  type: Ready
How can I fix this?
Thanks!
Answer 0 (score: 2)
In my case, the problem was that the worker nodes were not assigned a providerID correctly.
I managed to patch the nodes to add the providerID:
kubectl patch node ip-xxxxx.ap-southeast-2.compute.internal -p '{"spec":{"providerID":"aws:///ap-southeast-2a/i-0xxxxx"}}'
After that, when I deployed the service again, the ELB was created, the node group was added, and everything worked end to end. It's not a clean answer, but until I find a better solution, I'll leave it here.
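The providerID that the patch sets follows the pattern aws:///&lt;availability-zone&gt;/&lt;instance-id&gt;. A small sketch that assembles the same JSON patch body (the zone and instance ID below are placeholders, not values from this cluster):

```python
import json
import re


def provider_id(zone: str, instance_id: str) -> str:
    """Build an AWS providerID in the form the in-tree cloud
    provider expects: aws:///<availability-zone>/<instance-id>."""
    return f"aws:///{zone}/{instance_id}"


def patch_body(zone: str, instance_id: str) -> str:
    """JSON body for: kubectl patch node <name> -p '<body>'."""
    pid = provider_id(zone, instance_id)
    # Sanity check: zone like ap-southeast-2a, id like i-0abc...
    if not re.fullmatch(r"aws:///[a-z0-9-]+/i-[0-9a-f]+", pid):
        raise ValueError(f"unexpected providerID format: {pid}")
    return json.dumps({"spec": {"providerID": pid}})


print(patch_body("ap-southeast-2a", "i-0123456789abcdef0"))
```

Generating the body this way avoids the quoting mistakes that are easy to make when typing the JSON inline on the command line.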
Answer 1 (score: 1)
In my case, the problem was the missing option --cloud-provider=aws.
I put the following in /etc/default/kubelet (via terraform, in my case), redeployed the nodes, and everything worked:
/etc/default/kubelet
KUBELET_EXTRA_ARGS='--cloud-provider=aws'
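For the in-tree AWS provider, the API server and controller manager usually need the same flag as the kubelet. A sketch of the corresponding kubeadm ClusterConfiguration fragment (field layout as in the v1beta1 kubeadm API; verify against the kubeadm version you are running):

apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: aws
controllerManager:
  extraArgs:
    cloud-provider: aws

With this in place at kubeadm init time, nodes joining with --cloud-provider=aws should get their providerID populated automatically instead of needing a manual patch.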