kubectl logs -f gets "Authorization error"

Date: 2019-02-07 11:52:53

Tags: kubernetes authorization amazon-eks

I recently created a cluster on EKS using eksctl. Running kubectl logs -f mypod-0 fails with an authorization error:

Error from server (InternalError): Internal error occurred: Authorization error (user=kube-apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)

Any suggestions or insights would be appreciated.

4 answers:

Answer 0: (score: 1)

You need to create a ClusterRoleBinding whose role points to the user kube-apiserver-kubelet-client:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubelet-api-admin
subjects:
- kind: User
  name: kube-apiserver-kubelet-client
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:kubelet-api-admin
  apiGroup: rbac.authorization.k8s.io

system:kubelet-api-admin is usually a role with the necessary permissions, but you can substitute another appropriate role.
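If you prefer a one-liner over applying the manifest above, kubectl can create an equivalent binding directly (a sketch; the binding name kubelet-api-admin mirrors the manifest, but any unique name works):

```shell
# Create a ClusterRoleBinding granting the apiserver's kubelet-client
# identity the built-in system:kubelet-api-admin ClusterRole
kubectl create clusterrolebinding kubelet-api-admin \
  --clusterrole=system:kubelet-api-admin \
  --user=kube-apiserver-kubelet-client
```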

Answer 1: (score: 0)

I ran into the same problem, but I already had the binding installed:

$ kubectl get clusterrolebindings.rbac.authorization.k8s.io --all-namespaces -o yaml |grep -B 27 kube-apiserver-kubelet-client
- apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    creationTimestamp: "2020-12-07T16:50:37Z"
    managedFields:
    - apiVersion: rbac.authorization.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:roleRef:
          f:apiGroup: {}
          f:kind: {}
          f:name: {}
        f:subjects: {}
      manager: kubectl
      operation: Update
      time: "2020-12-07T16:50:37Z"
    name: kubelet-api-admin
    resourceVersion: "3324665"
    selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/kubelet-api-admin
    uid: 593ee218-6e21-4737-a5a6-c39b2976b94c
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: system:kubelet-api-admin
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kube-apiserver-kubelet-client

It only happens intermittently.
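To confirm whether the binding above is actually taking effect, you can impersonate the failing user and test the exact verb and resource from the error message (a quick check, assuming your own credentials have cluster-admin rights):

```shell
# Ask the API server whether the impersonated user may "get nodes/proxy",
# the permission the original error complains about; prints yes or no
kubectl auth can-i get nodes/proxy --as=kube-apiserver-kubelet-client
```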

Answer 2: (score: 0)

This can happen if your aws-auth ConfigMap is broken or empty, which can occur, for example, if you run several eksctl operations in parallel.
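A quick way to check is to dump the ConfigMap and verify its mapRoles/mapUsers entries survived (standard EKS location in the kube-system namespace):

```shell
# Inspect the aws-auth ConfigMap that maps IAM identities to cluster users;
# an empty or mangled data section points to the corruption described above
kubectl -n kube-system get configmap aws-auth -o yaml
```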

Answer 3: (score: 0)

On an on-prem cluster, I hit this after changing the master's DNS address. You need to change the DNS name in /etc/kubernetes/kubelet.conf on each node, then run sudo systemctl restart kubelet.service.
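The per-node fix might look like the following sketch (both hostnames are illustrative placeholders; substitute your old and new master addresses):

```shell
# Rewrite the API server endpoint in the kubelet's kubeconfig, then restart.
# old-master/new-master hostnames below are hypothetical examples.
sudo sed -i \
  's|https://old-master.example.com:6443|https://new-master.example.com:6443|' \
  /etc/kubernetes/kubelet.conf
sudo systemctl restart kubelet.service
```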