I am trying to aggregate logs from Raspberry Pi (IoT) devices into Logstash/ElasticSearch running in EKS. filebeat is already running in EKS to aggregate container logs.
Here are my manifest files:
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  namespace: kube-logging
  labels:
    app: logstash
data:
  logstash.conf: |-
    input {
      tcp {
        port => 5000
        type => syslog
      }
    }
    filter {
      grok {
        match => {"message" => "%{SYSLOGLINE}"}
      }
    }
    output {
      elasticsearch {
        hosts => ["http://elasticsearch:9200"]
        index => "syslog-%{+YYYY.MM.dd}"
      }
      stdout { codec => rubydebug }
    }
---
kind: Deployment
apiVersion: apps/v1beta1
metadata:
  name: logstash
  namespace: kube-logging
  labels:
    app: logstash
spec:
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash:7.2.1
        imagePullPolicy: Always
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: changeme
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        ports:
        - name: logstash
          containerPort: 5000
          protocol: TCP
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 800Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /usr/share/logstash/pipeline/logstash.conf
          readOnly: true
          subPath: logstash.conf
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: logstash-config
---
kind: Service
apiVersion: v1
metadata:
  name: logstash
  namespace: kube-logging
  labels:
    app: logstash
spec:
  selector:
    app: logstash
  clusterIP: None
  ports:
  - name: tcp-port
    protocol: TCP
    port: 5000
    targetPort: 5000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: logstash-external
  namespace: kube-logging
  labels:
    app: logstash
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/frontend-entry-points: tcp
spec:
  rules:
  - host: logstash.dev.domain.com
    http:
      paths:
      - backend:
          serviceName: logstash
          servicePort: 5000
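For reference (this is an aside, not part of the manifests above): the %{SYSLOGLINE} grok pattern in the pipeline expects a classic syslog-formatted line, so a bare "test message" would only be indexed with a _grokparsefailure tag rather than parsed into fields. A hypothetical line that should match the pattern, with a made-up hostname and program name:

# hostname "raspberrypi" and program "myapp" are placeholders for illustration
echo -n "Jan  1 00:00:00 raspberrypi myapp[123]: hello from the pi" | nc logstash.dev.domain.com 5000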
I am able to send a test message:
echo -n "test message" | nc logstash.dev.domain.com 5000
but tcpdump port 5000 inside the logstash container shows nothing arriving.
If I run echo -n "test message" | nc logstash.dev.domain.com 5000 from inside the logstash container itself, then tcpdump port 5000 on that container does show the message.
From any pod inside EKS I can send a test message with echo -n "test message 4" | nc -q 0 logstash 5000; it is received by logstash and forwarded to ElasticSearch.
But not from outside the cluster, so the traefik ingress controller appears to be the problem here.
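A rough sketch of how I am trying to narrow down where the connection is dropped (the hostname and port are the ones from the manifests above; nothing else is assumed):

# does the external hostname resolve to the NLB in front of traefik?
dig +short logstash.dev.domain.com
# does that endpoint accept a TCP connection on port 5000 at all?
nc -vz logstash.dev.domain.com 5000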
Here is my traefik ingress controller configuration for EKS:
traefik.toml: |
  defaultEntryPoints = ["http","https"]
  logLevel = "INFO"
  [entryPoints]
    [entryPoints.http]
      address = ":80"
      compress = true
      [entryPoints.http.redirect]
        entryPoint = "https"
      [entryPoints.http.whiteList]
        sourceRange = ["0.0.0.0/0"]
    [entryPoints.https]
      address = ":443"
      compress = true
      [entryPoints.https.tls]
      [entryPoints.https.whiteList]
        sourceRange = ["0.0.0.0/0"]
    [entryPoints.tcp]
      address = ":5000"
      compress = true
And the traefik service:
kind: Service
apiVersion: v1
metadata:
  name: ingress-external
  namespace: kube-system
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: traefik-ingress-lb
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443
  - name: tcp-5000
    protocol: TCP
    port: 5000
    targetPort: 5000
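To see whether the TCP stream at least reaches a traefik pod, a sketch (the pod name is a placeholder, and this assumes tcpdump is actually available in the traefik image):

# confirm the NLB Service really exposes port 5000
kubectl -n kube-system get svc ingress-external
# watch for the test message on a traefik pod while sending it from outside the cluster
kubectl -n kube-system exec -it <traefik-pod> -- tcpdump -ni any port 5000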
What is wrong here?
Answer 0 (score: 0)
If you have never used logstash before, you may need to create the logstash index pattern manually. The data will not show up under filebeat, because elasticsearch is not receiving this data from filebeat but from logstash itself. I may be completely wrong in this answer, but if you go to:
Settings > Index Patterns > Create index pattern
and type logstash where it asks for a name, you can pick logstash from the suggestions that appear below it.
Once this is created, you should see a drop-down on the Discover page showing logstash, and under that logstash entry you should see all the data being pushed.
You may already have a logstash index pattern set up, in which case this may not be the problem at all.
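As a quick sanity check before creating an index pattern, you could also ask elasticsearch directly whether a syslog-* index exists at all. A sketch, run from any pod inside the cluster, using the host (and, if security is enabled, the elastic/changeme credentials) from the manifests in the question:

curl -u elastic:changeme "http://elasticsearch:9200/_cat/indices?v" | grep syslog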
Answer 1 (score: 0)
The hosts setting in the elasticsearch output looks suspicious to me. As described in the documentation, a URI has to be specified there. You should give it with the protocol, i.e. http://{IP or name}:9200 or https://{IP or name}:9200, just like in a curl request.
I would try that.
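If in doubt, the URI can be checked exactly like a curl request from inside the logstash pod. A sketch, assuming curl is available in the image and using the host from the question's pipeline config (the pod name is a placeholder):

kubectl -n kube-logging exec -it <logstash-pod> -- curl -s http://elasticsearch:9200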