I am trying to run Elasticsearch and Kibana on a Kubernetes cluster (same namespace). I created a Pod and a Service for both Elasticsearch and Kibana. When I open the Elasticsearch page (http://localhost:8001/api/v1/namespaces/default/pods/elasticsearch/proxy/), everything looks fine, but when I open Kibana's page I get "Kibana did not load properly. Check the server output for more information."
The logs of the Kibana pod are as follows:
{"type":"error","@timestamp":"2019-03-04T19:27:21Z","tags":["warning","stats-collection"],"pid":1,"level":"error","error":{"message":"Request Timeout after 30000ms","name":"Error","stack":"Error: Request Timeout after 30000ms\n at /usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:355:15\n at Timeout.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:384:7)\n at ontimeout (timers.js:436:11)\n at tryOnTimeout (timers.js:300:5)\n at listOnTimeout (timers.js:263:5)\n at Timer.processTimers (timers.js:223:10)"},"message":"Request Timeout after 30000ms"}
These are the yaml files:
deployment_elasticsearch.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: elasticsearch
  labels:
    service: elasticsearch
spec:
  ports:
  containers:
  - name: elasticsearch
    image: elasticsearch:6.6.1
    ports:
    - containerPort: 9200
    - containerPort: 9300
    env:
    - name: discovery.type
      value: "single-node"
deployment_elasticsearch_service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    service: elasticsearch
spec:
  ports:
  - port: 9200
    name: serving
  - port: 9300
    name: node-to-node
  selector:
    service: elasticsearch
deployment_kibana.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: kibana
  labels:
    service: kibana
spec:
  ports:
  containers:
  - name: kibana
    image: kibana:6.6.1
    ports:
    - containerPort: 5601
deployment_kibana_service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: kibana
  labels:
    service: kibana
spec:
  ports:
  - port: 5601
    name: serving
  selector:
    service: kibana
Also, when I exec into the Kibana pod and run "$ curl http://elasticsearch:9200", I get the Elasticsearch home page (so I believe Kibana can reach Elasticsearch).
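For reference, the same check can be run from outside the pod (the command assumes the pod name kibana from the yaml above and that curl is available in the image, which the test above suggests):
kubectl exec kibana -- curl -s http://elasticsearch:9200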
EDIT: Here are the grepped error lines from the Kibana log:
{"type":"log","@timestamp":"2019-03-04T22:41:16Z","tags":["status","plugin:index_management@6.6.1","error"],"pid":1,"state":"red","message":"Status changed from green to red - Request Timeout after 30000ms","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2019-03-04T22:41:16Z","tags":["status","plugin:index_lifecycle_management@6.6.1","error"],"pid":1,"state":"red","message":"Status changed from green to red - Request Timeout after 30000ms","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2019-03-04T22:41:16Z","tags":["status","plugin:rollup@6.6.1","error"],"pid":1,"state":"red","message":"Status changed from green to red - Request Timeout after 30000ms","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2019-03-04T22:41:16Z","tags":["status","plugin:remote_clusters@6.6.1","error"],"pid":1,"state":"red","message":"Status changed from green to red - Request Timeout after 30000ms","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2019-03-04T22:41:16Z","tags":["status","plugin:cross_cluster_replication@6.6.1","error"],"pid":1,"state":"red","message":"Status changed from green to red - Request Timeout after 30000ms","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2019-03-04T22:41:16Z","tags":["status","plugin:reporting@6.6.1","error"],"pid":1,"state":"red","message":"Status changed from green to red - Request Timeout after 30000ms","prevState":"green","prevMsg":"Ready"}
{"type":"log","@timestamp":"2019-03-04T22:41:34Z","tags":["spaces","error"],"pid":1,"message":"Unable to navigate to space \"default\", redirecting to Space Selector. Error: Request Timeout after 30000ms"}
{"type":"log","@timestamp":"2019-03-04T22:41:41Z","tags":["spaces","error"],"pid":1,"message":"Unable to navigate to space \"default\", redirecting to Space Selector. Error: Request Timeout after 30000ms"}
From researching online, I think the problem is that Elasticsearch and Kibana cannot talk to each other. Do you know why?
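One way to narrow this down (these are generic checks, not something already run above) is to confirm that the elasticsearch Service name resolves inside the kibana pod and that the Service itself exists:
kubectl exec kibana -- getent hosts elasticsearch   # getent may be missing in some images; nslookup works too
kubectl get svc elasticsearch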
EDIT 2: describe output:
kubectl describe pod kibana
Name: kibana
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: minikube/10.0.2.15
Start Time: Tue, 05 Mar 2019 00:21:23 +0200
Labels: service=kibana
Annotations: <none>
Status: Running
IP: 172.17.0.5
Containers:
kibana:
Container ID: docker://7eecb30b2f197120706d790e884db44696d5d1a30d3ec48a9ca2a6255eca7e8a
Image: kibana:6.6.1
Image ID: docker-pullable://kibana@sha256:a2b329d8903978069632da8aa85cc5199c5ab2cf289c48b7851bafd6ee58bbea
Port: 5601/TCP
Host Port: 0/TCP
State: Running
Started: Tue, 05 Mar 2019 00:21:24 +0200
Ready: True
Restart Count: 0
Environment:
ELASTICSEARCH_URL: http://elasticsearch:9200
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-q25px (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-q25px:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-q25px
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 51m default-scheduler Successfully assigned default/kibana to minikube
Normal Pulled 51m kubelet, minikube Container image "kibana:6.6.1" already present on machine
Normal Created 51m kubelet, minikube Created container
Normal Started 51m kubelet, minikube Started container
Answer 0 (score: 1)
I replicated your setup in my cluster, and the connection between Kibana and Elasticsearch is fine.
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
elasticsearch 1/1 Running 0 37m 10.244.1.8 worker-12 <none> <none>
kibana 1/1 Running 0 25m 10.244.3.10 worker-14 <none> <none>
Ping from kibana to elasticsearch:
bash-4.2$ ping 10.244.1.8
PING 10.244.1.8 (10.244.1.8) 56(84) bytes of data.
64 bytes from 10.244.1.8: icmp_seq=1 ttl=62 time=0.705 ms
64 bytes from 10.244.1.8: icmp_seq=2 ttl=62 time=0.501 ms
From elasticsearch to kibana:
[root@elasticsearch elasticsearch]# ping 10.244.3.10
PING 10.244.3.10 (10.244.3.10) 56(84) bytes of data.
64 bytes from 10.244.3.10: icmp_seq=1 ttl=62 time=0.444 ms
64 bytes from 10.244.3.10: icmp_seq=2 ttl=62 time=0.462 ms
The problem you are facing is caused by the hostname. kibana.yml uses "elasticsearch" in the Elasticsearch URL (http://elasticsearch:9200), and the kibana container cannot resolve the name "elasticsearch".
So you have to add an entry to the /etc/hosts file with the IP address of "elasticsearch". For example, in my case /etc/hosts looks like this:
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.244.3.10 kibana
10.244.1.8 elasticsearch
That should solve your problem.
However, this is not trivial: you cannot simply edit that file inside a running container; you would have to rebuild the image or run the container with the --add-host option. Look here for --add-host.
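For illustration only (this is plain Docker, outside Kubernetes; the IP is just the pod IP from this example), --add-host injects such an /etc/hosts entry when the container starts:
docker run -d --name kibana --add-host elasticsearch:10.244.1.8 -p 5601:5601 kibana:6.6.1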
A simpler workaround is to change kibana.yml to something like this:
# Default Kibana configuration from kibana-docker.
server.name: kibana
server.host: "0"
elasticsearch.url: http://10.244.1.8:9200 #enter your elasticsearch container IP
xpack.monitoring.ui.container.elasticsearch.enabled: true
Configure the correct IP address of the Elasticsearch container there, then restart the Kibana container. The same applies the other way around for the Elasticsearch container.
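As a side note not taken from the original answer: instead of hardcoding the IP in kibana.yml, the same URL can be passed through the ELASTICSEARCH_URL environment variable, which the describe output above shows is already set on the kibana pod. A minimal sketch of that part of the Pod spec:
spec:
  containers:
  - name: kibana
    image: kibana:6.6.1
    env:
    - name: ELASTICSEARCH_URL
      value: "http://elasticsearch:9200"   # or the Service ClusterIP if name resolution is the problem
    ports:
    - containerPort: 5601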
Further edit.
To change the hosts file from the k8s yaml, first start the Elasticsearch Service/cluster:
[root@controller-11 test-dir]# kubectl get services elasticsearch -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
elasticsearch ClusterIP 10.103.254.157 <none> 9200/TCP,9300/TCP 153m service=elasticsearch
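For completeness, creating the Elasticsearch pod and Service from the question's manifests would look like this (the exact commands are an assumption; the filenames are the ones given in the question):
kubectl apply -f deployment_elasticsearch.yaml
kubectl apply -f deployment_elasticsearch_service.yaml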
Then edit the kibana Pod yaml with the Elasticsearch Service IP address, using hostAliases. It looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: kibana
  labels:
    service: kibana
spec:
  hostAliases:
  - ip: "10.103.254.157"
    hostnames:
    - "elasticsearch"
  ports:
  containers:
  - name: kibana
    image: kibana:6.6.1
    ports:
    - containerPort: 5601
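To pick up the hostAliases change, the pod has to be recreated (the commands below are an assumption based on the pod name and the filename used in the question):
kubectl delete pod kibana
kubectl apply -f deployment_kibana.yaml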
Log in to your kibana container and check the /etc/hosts file; it looks like this:
bash-4.2$ cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.244.2.2 kibana
# Entries added by HostAliases.
10.103.254.157 elasticsearch
Then try contacting the Elasticsearch server; it looks like this:
bash-4.2$ curl http://elasticsearch:9200
{
  "name" : "tyqNRro",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "tFmM2Nq9RDmGlDy6G2FUZw",
  "version" : {
    "number" : "6.6.1",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "1fd8f69",
    "build_date" : "2019-02-13T17:10:04.160291Z",
    "build_snapshot" : false,
    "lucene_version" : "7.6.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
That should do it.
Further edit.
After further investigation, it looks like the configuration you are using should work as it is, without the changes I suggested above. It seems your k8s elasticsearch Service is not set up correctly: if the Service were configured properly, we should see endpoints pointing to your Elasticsearch container. It should look like this:
root@server1d:~# kubectl describe service elasticsearch
Name: elasticsearch
Namespace: default
Labels: service=elasticsearch
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"service":"elasticsearch"},"name":"elasticsearch","namespace"...
Selector: service=elasticsearch
Type: ClusterIP
IP: 10.102.227.86
Port: serving 9200/TCP
TargetPort: 9200/TCP
Endpoints: 10.244.1.9:9200
Port: node-to-node 9300/TCP
TargetPort: 9300/TCP
Endpoints: 10.244.1.9:9300
Session Affinity: None
Events: <none>
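A quick way to check this on your side (a suggested verification, not quoted from the answer) is to list the Service's endpoints and compare the pod labels against the Service selector; an empty ENDPOINTS column means the selector does not match any pod:
kubectl get endpoints elasticsearch
kubectl get pods -l service=elasticsearch --show-labels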