Prometheus 9.5.4: helm release status FAILED

Date: 2019-12-18 10:35:39

Tags: amazon-web-services kubernetes prometheus kubernetes-helm hyper-v

I am following this lesson but cannot understand or resolve why the deployment status is FAILED: https://linuxacademy.com/cp/courses/lesson/course/2205/lesson/2/module/218

I am seeing the same issue in 2 different environments:

  1. k8s cluster running in AWS; the release status is FAILED

    $ kubectl describe deployment prometheus -n prometheus

    Error from server (NotFound): deployments.extensions "prometheus" not found

    $ helm ls --all

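Two things worth checking here (my own notes, not part of the lesson): the stable/prometheus chart names its Deployments after the release plus a component suffix (prometheus-server, prometheus-alertmanager, and so on, if I remember the chart's naming correctly), so describing a Deployment literally named prometheus can return NotFound even when something was created; and Helm 2 stores the failure reason with a FAILED release. A sketch of the commands I would run, reusing the release and namespace names from above:

    # List whatever the chart did manage to create (names are usually prometheus-server, etc.)
    kubectl get deployments,pods -n prometheus

    # Helm 2 keeps the reason for the failure with the release
    helm status prometheus

    # Recent events often explain why resources were rejected
    kubectl get events -n prometheus --sort-by=.lastTimestamp

    # A FAILED release still occupies its name; purge it before retrying the install
    helm delete --purge prometheus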
  2. k8s cluster running on Hyper-V

    admin1@POC-k8s-master:~/poc-cog$ helm init --wait

    curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > /tmp/get_helm.sh
    chmod 700 /tmp/get_helm.sh
    DESIRED_VERSION=v2.8.2 /tmp/get_helm.sh
    helm init --wait
    kubectl --namespace=kube-system create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
    helm ls
    cd ~/
    git clone https://github.com/kubernetes/charts
    cd charts
    git checkout efdcffe0b6973111ec6e5e83136ea74cdbe6527d
    cd ../
    vi prometheus-values.yml
    prometheus-values.yml:
    
    alertmanager:
        persistentVolume:
            enabled: false
    server:
        persistentVolume:
            enabled: false
    Then run:
    
    helm install -f prometheus-values.yml charts/stable/prometheus --name prometheus --namespace prometheus
    vi grafana-values.yml
    grafana-values.yml:
    
    adminPassword: password
    Then run:
    
    helm install -f grafana-values.yml charts/stable/grafana/ --name grafana --namespace grafana
    vi grafana-ext.yml
    grafana-ext.yml:
    
    kind: Service
    apiVersion: v1
    metadata:
      namespace: grafana
      name: grafana-ext
    spec:
      type: NodePort
      selector:
        app: grafana
      ports:
      - protocol: TCP
        port: 3000
        nodePort: 8080
    Then run:
    
    kubectl apply -f grafana-ext.yml
    You can check on the status of the prometheus and grafana pods with these commands:
    
    kubectl get pods -n prometheus
    kubectl get pods -n grafana
    When setting up your datasource in grafana, use this url:
    
    http://prometheus-server.prometheus.svc.cluster.local
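Note that nodePort: 8080 in grafana-ext.yml is outside the default NodePort range (30000-32767), so unless the apiserver's --service-node-port-range has been changed the Service will be rejected. A minimal variant assuming the default range (30080 is just an illustrative value):

    kind: Service
    apiVersion: v1
    metadata:
      namespace: grafana
      name: grafana-ext
    spec:
      type: NodePort
      selector:
        app: grafana
      ports:
      - protocol: TCP
        port: 3000
        # nodePort must fall inside the kube-apiserver's service-node-port-range
        nodePort: 30080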
    

admin1@POC-k8s-master:~/.helm$ kubectl get nodes -o wide

    NAME             STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
    poc-k8s-master   Ready    master   24d   v1.16.3   192.168.137.2   <none>        Ubuntu 16.04.6 LTS   4.4.0-62-generic   docker://19.3.5
    poc-k8s-node1    Ready    <none>   24d   v1.16.3   192.168.137.3   <none>        Ubuntu 16.04.6 LTS   4.4.0-62-generic   docker://18.6.2

admin1@POC-k8s-master:~/.helm$ helm ls --all

    NAME            REVISION        UPDATED                         STATUS          CHART                   NAMESPACE
    grafana         1               Tue Dec 17 12:26:32 2019        DEPLOYED        grafana-1.8.0           grafana
    prometheus      1               Wed Dec 18 10:24:58 2019        FAILED          prometheus-9.5.4        prometheus
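One observation of mine that is not in the post itself: the nodes report Kubernetes v1.16.3, and 1.16 removed the extensions/v1beta1 and apps/v1beta1/v1beta2 workload APIs. Helm 2 clients and Tillers older than roughly 2.15/2.16, as well as old chart revisions, still use those API groups, which is a common cause of FAILED releases and of the "could not find the requested resource" error further down. A quick way to compare the Helm/Tiller version with what the cluster still serves:

    # Client and Tiller versions (Helm 2); releases much older than 2.16 struggle on 1.16 clusters
    helm version

    # Which API groups still serve Deployments on this cluster
    kubectl api-resources | grep -i deployment

    # The apps/v1 resources should be the ones that exist
    kubectl get deployments.apps --all-namespaces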

admin@ip-172-20-49-150:~/dev-migration/stage$ helm install stable/prometheus

    $HELM_HOME has been configured at /home/admin1/.helm.
    Error: error installing: the server could not find the requested resource
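In my experience, "Error: error installing: the server could not find the requested resource" is what a Helm 2 client prints when it tries to create or reach Tiller through an API group the cluster no longer serves; the lesson pins DESIRED_VERSION=v2.8.2, which predates Kubernetes 1.16. A sketch of upgrading the client and Tiller, reusing the installer script quoted above (v2.16.1 is just an example of a 1.16-compatible Helm 2 release):

    # Re-run the installer from the lesson with a newer Helm 2 client
    curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > /tmp/get_helm.sh
    chmod 700 /tmp/get_helm.sh
    DESIRED_VERSION=v2.16.1 /tmp/get_helm.sh

    # Upgrade (or install) the matching Tiller in the cluster
    helm init --upgrade --wait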
  

admin@ip-172-20-49-150:~/dev-migration/stage$ kubectl --namespace=kube-system create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
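One thing I noticed about this binding (my reading, not stated anywhere in the post): it grants cluster-admin to the default service account in kube-system, while the error below shows Tiller running as system:serviceaccount:kube-system:tiller, which that binding does not cover. A quick check of which service account Tiller actually uses and what bindings exist:

    # Which service account does the Tiller deployment run under?
    kubectl -n kube-system get deploy tiller-deploy -o jsonpath='{.spec.template.spec.serviceAccountName}'

    # Existing cluster role bindings that mention tiller or cluster-admin
    kubectl get clusterrolebindings | grep -Ei 'tiller|cluster-admin'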

admin@ip-172-20-49-150:~/dev-migration/stage$ helm install stable/prometheus

Error: release loping-owl failed: clusterroles.rbac.authorization.k8s.io "loping-owl-prometheus-kube-state-metrics" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["namespaces"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["namespaces"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["nodes"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["nodes"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["persistentvolumeclaims"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["persistentvolumeclaims"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["resourcequotas"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["resourcequotas"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["replicationcontrollers"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["replicationcontrollers"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["limitranges"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["limitranges"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["persistentvolumeclaims"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["persistentvolumeclaims"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["persistentvolumes"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["persistentvolumes"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["configmaps"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["configmaps"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["daemonsets"], APIGroups:["extensions"], Verbs:["list"]} PolicyRule{Resources:["daemonsets"], APIGroups:["extensions"], Verbs:["watch"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["list"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["watch"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["list"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["watch"]} PolicyRule{Resources:["replicasets"], APIGroups:["extensions"], Verbs:["list"]} PolicyRule{Resources:["replicasets"], APIGroups:["extensions"], Verbs:["watch"]} PolicyRule{Resources:["daemonsets"], APIGroups:["apps"], Verbs:["get"]} PolicyRule{Resources:["daemonsets"], APIGroups:["apps"], Verbs:["list"]} PolicyRule{Resources:["daemonsets"], APIGroups:["apps"], Verbs:["watch"]} PolicyRule{Resources:["deployments"], APIGroups:["apps"], Verbs:["get"]} PolicyRule{Resources:["deployments"], APIGroups:["apps"], Verbs:["list"]} PolicyRule{Resources:["deployments"], APIGroups:["apps"], Verbs:["watch"]} PolicyRule{Resources:["statefulsets"], APIGroups:["apps"], Verbs:["get"]} PolicyRule{Resources:["statefulsets"], APIGroups:["apps"], Verbs:["list"]} PolicyRule{Resources:["statefulsets"], APIGroups:["apps"], Verbs:["watch"]} PolicyRule{Resources:["cronjobs"], APIGroups:["batch"], Verbs:["list"]} PolicyRule{Resources:["cronjobs"], APIGroups:["batch"], Verbs:["watch"]} PolicyRule{Resources:["jobs"], APIGroups:["batch"], Verbs:["list"]} 
PolicyRule{Resources:["jobs"], APIGroups:["batch"], Verbs:["watch"]} PolicyRule{Resources:["horizontalpodautoscalers"], APIGroups:["autoscaling"], Verbs:["list"]} PolicyRule{Resources:["horizontalpodautoscalers"], APIGroups:["autoscaling"], Verbs:["watch"]} PolicyRule{Resources:["poddisruptionbudgets"], APIGroups:["policy"], Verbs:["list"]} PolicyRule{Resources:["poddisruptionbudgets"], APIGroups:["policy"], Verbs:["watch"]} PolicyRule{Resources:["certificatesigningrequests"], APIGroups:["certificates.k8s.io"], Verbs:["list"]} PolicyRule{Resources:["certificatesigningrequests"], APIGroups:["certificates.k8s.io"], Verbs:["watch"]}] user=&{system:serviceaccount:kube-system:tiller b474eab9-b753-11e9-83a0-06e8a114eea2 [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[[clusterroles.rbac.authorization.k8s.io "cluster-admin" not found, clusterroles.rbac.authorization.k8s.io "cluster-admin" not found]]

0 Answers:

No answers yet