This is my prometheus.yml file; I load it with kubectl create configmap prometheus-server-config --from-file=prometheus.yml
global:
  scrape_interval: 5s
  evaluation_interval: 5s
scrape_configs:
- job_name: 'goserver'
  scheme: http
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  kubernetes_sd_configs:
  - api_servers:
    - 'https://kubernetes.default.svc'
    in_cluster: true
    role: pod
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_label_app]
    regex: goserver
    action: keep
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: kubernetes_pod_name
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: kubernetes_namespace
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: [__meta_kubernetes_pod_ready]
    action: replace
    target_label: kubernetes_pod_ready
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: (.+):(?:\d+);(\d+)
    replacement: ${1}:${2}
    target_label: __address__
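One point worth noting about the service-discovery section above: api_servers and in_cluster are Prometheus 1.x configuration keys. If prom/prometheus:latest resolves to a 2.x image, that section would fail config parsing at startup. A hedged sketch of the 2.x equivalent (assumption: the image is 2.x; with no api_server set, Prometheus 2.x picks up the in-cluster service-account credentials automatically):

```yaml
# Sketch, assuming Prometheus 2.x: the api_servers / in_cluster keys were
# removed; an empty pod-role block discovers pods via the in-cluster API.
kubernetes_sd_configs:
- role: pod
```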
Below is the deployment-prometheus.yaml file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: prometheus-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
      - name: prometheus
        image: prom/prometheus:latest
        args:
        - "-config.file=/etc/prometheus/conf/prometheus.yml"
        # Metrics are stored in an emptyDir volume which
        # exists as long as the Pod is running on that Node.
        # The data in an emptyDir volume is safe across container crashes.
        - "-storage.local.path=/prometheus/"
        ports:
        - containerPort: 9090
        volumeMounts:
        - name: prometheus-server-volume
          mountPath: /etc/prometheus/conf/
        - name: prometheus-storage-volume
          mountPath: /prometheus/
      volumes:
      - name: prometheus-server-volume
        configMap:
          name: prometheus-server-config
      - name: prometheus-storage-volume
        emptyDir: {} # containers in the Pod can all read and write the same files here.
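The args above use Prometheus 1.x single-dash flags (-config.file, -storage.local.path); a 2.x binary rejects them at startup, which alone is enough to crash the container. A hedged sketch of the 2.x flag form (assumption: prom/prometheus:latest is a 2.x image; 2.x renamed local storage to TSDB):

```yaml
# Sketch, assuming Prometheus 2.x: flags are double-dash, and local
# storage is configured via --storage.tsdb.path instead.
args:
- "--config.file=/etc/prometheus/conf/prometheus.yml"
- "--storage.tsdb.path=/prometheus/"
```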
But whenever I create the deployment with kubectl create -f deployment-prometheus.yaml,
the pod status shows CrashLoopBackOff.
I have gone through the prometheus-kubernetes example, and I am running my cluster in minikube.
What could be the reason for this?
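For a CrashLoopBackOff, the pod's own logs usually name the exact startup error. A minimal debugging sketch (the pod name and the `kubectl get pods` listing below are hypothetical sample data; in a live cluster you would capture the real listing instead):

```shell
# Hypothetical sample of `kubectl get pods` output; in a live cluster,
# replace this with: PODS=$(kubectl get pods)
PODS='NAME                                    READY   STATUS             RESTARTS   AGE
prometheus-deployment-6d8b5c9f4-abcde   0/1     CrashLoopBackOff   5          3m'

# Pick out the failing pod's name and status from the listing
POD=$(echo "$PODS" | awk '/prometheus-deployment/ {print $1}')
STATUS=$(echo "$PODS" | awk '/prometheus-deployment/ {print $3}')
echo "pod=$POD status=$STATUS"

# With a real cluster, the next diagnostic steps would be:
#   kubectl logs "$POD"            # the startup error printed by Prometheus itself
#   kubectl describe pod "$POD"    # events: image pull, volume mount, probe failures
```

Checking `kubectl logs` first matters here: a config or flag parse error is printed by the Prometheus binary before it exits, which `describe` alone will not show.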
Answer 0 (score: 1)
I run my cluster on AWS, and the following tutorial worked for me: https://itnext.io/kubernetes-monitoring-with-prometheus-in-15-minutes-8e54d1de2e13. Let's see if you can adapt it. Basically, I used helm to install coreos/kube-prometheus.
# initialize tiller account
kubectl create serviceaccount -n kube-system tiller
kubectl create clusterrolebinding tiller-binding --clusterrole=cluster-admin --serviceaccount kube-system:tiller
helm init --service-account tiller
# install Prometheus app
sleep 1m
helm repo add coreos https://s3-eu-west-1.amazonaws.com/coreos-charts/stable/
helm install coreos/prometheus-operator --name prometheus-operator --namespace monitoring
helm install coreos/kube-prometheus --name kube-prometheus --namespace monitoring --set global.rbacEnable=true --set prometheus.resources.requests.memory=300Mi
# forward ports
kubectl port-forward -n monitoring prometheus-kube-prometheus-0 9090 &
kubectl port-forward $(kubectl get pods --selector=app=kube-prometheus-grafana -n monitoring --output=jsonpath="{.items..metadata.name}") -n monitoring 3000 &
kubectl port-forward -n monitoring alertmanager-kube-prometheus-0 9093 &
Answer 1 (score: 1)
One-liner
kubectl apply --filename https://raw.githubusercontent.com/giantswarm/kubernetes-prometheus/master/manifests-all.yaml
Prerequisite
kubectl create namespace monitoring
Result (screenshot not reproduced here)
Reference
https://github.com/giantswarm/kubernetes-prometheus#quick-start
@Utkarsh Mani Tripathi