I'm having trouble getting a FluentD installation in an Amazon EKS cluster to ship data directly to an Elasticsearch stack in Azure. I want to configure it with certificates (ca.pem, cert.pem and cert.key) instead of user/password authentication, the same way I do with Filebeat.
I have the FluentD pods up and running and RBAC works, but since there seems to be no documentation for certificate authentication, I resorted to trial and error, and so far nothing has worked.
My configuration looks like this:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: elasticsearch-azure
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd-role
rules:
- apiGroups: [""]
  resources:
  - namespaces
  - pods
  - pods/log
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluentd-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluentd-role
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: elasticsearch-azure
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: elasticsearch-azure
  labels:
    k8s-app: fluentd-logging
    version: v1
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
      version: v1
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
    spec:
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
        envFrom:
        - secretRef:
            name: fluent-tls
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "{{server_name}}"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "{port}"
        - name: FLUENT_ELASTICSEARCH_SCHEME
          value: "https"
        # Option to configure elasticsearch plugin with self signed certs
        # ================================================================
        - name: FLUENT_ELASTICSEARCH_SSL_VERIFY
          value: "true"
        # Option to configure elasticsearch plugin with tls
        # ================================================================
        - name: FLUENT_ELASTICSEARCH_SSL_VERSION
          value: "TLSv1_2"
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: ssl
          mountPath: /fluent-tls/ssl
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      # certificates folder, laid out as for filebeat
      - name: ssl
        secret:
          secretName: fluent-tls
I created the secret with the following command:
kubectl create secret generic fluent-tls \
  --from-file=ca_file=./chain.pem \
  --from-file=cert_pem=./cert.pem \
  --from-file=cert_key=./cert.key
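Before wiring the secret into FluentD, it is worth confirming the certificate material itself is consistent: the "sslv3 alert handshake failure" shown below is also what a server returns when it rejects the client certificate. A quick sanity check with plain `openssl` (demonstrated on a throwaway self-signed pair here; run the same two checks against the real `chain.pem` / `cert.pem` / `cert.key`):

```shell
# Generate a throwaway self-signed cert/key pair purely to demonstrate the
# checks; substitute your real chain.pem / cert.pem / cert.key.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout cert.key -out cert.pem -days 1 2>/dev/null

# 1. The private key must match the certificate: both digests must be identical.
cert_mod=$(openssl x509 -noout -modulus -in cert.pem | openssl md5)
key_mod=$(openssl rsa -noout -modulus -in cert.key | openssl md5)
echo "cert: $cert_mod"
echo "key:  $key_mod"

# 2. The certificate must verify against the CA file
#    (self-signed here, so the cert acts as its own CA; use chain.pem normally).
openssl verify -CAfile cert.pem cert.pem
```

If the digests differ or the verify step fails, no FluentD configuration will make the TLS handshake succeed.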
The error I get when running the pod is the following:
<match **>
  @type elasticsearch
  @id out_es
  @log_level "info"
  include_tag_key true
  host "super-sercret-host.com"
  port even-more-secret-portnumber
  path ""
  scheme https
  ssl_verify false
  ssl_version TLSv1_2
  reload_connections false
  reconnect_on_error true
  reload_on_failure true
  log_es_400_reason false
  logstash_prefix "logstash"
  logstash_format true
  index_name "logstash"
  type_name "fluentd"
  <buffer>
    flush_thread_count 8
    flush_interval 5s
    chunk_limit_size 2M
    queue_limit_length 32
    retry_max_interval 30
    retry_forever true
  </buffer>
</match>
</ROOT>
2019-12-16 14:51:30 +0000 [info]: starting fluentd-1.7.4 pid=6 ruby="2.6.5"
2019-12-16 14:51:30 +0000 [info]: spawn command to main: cmdline=["/usr/local/bin/ruby", "-Eascii-8bit:ascii-8bit", "/fluentd/vendor/bundle/ruby/2.6.0/bin/fluentd", "-c", "/fluentd/etc/fluent.conf", "-p", "/fluentd/plugins", "--gemfile", "/fluentd/Gemfile", "--under-supervisor"]
2019-12-16 14:51:31 +0000 [info]: gem 'fluent-plugin-concat' version '2.4.0'
2019-12-16 14:51:31 +0000 [info]: gem 'fluent-plugin-detect-exceptions' version '0.0.13'
2019-12-16 14:51:31 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '3.7.1'
2019-12-16 14:51:31 +0000 [info]: gem 'fluent-plugin-grok-parser' version '2.6.1'
2019-12-16 14:51:31 +0000 [info]: gem 'fluent-plugin-json-in-json-2' version '1.0.2'
2019-12-16 14:51:31 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '2.3.0'
2019-12-16 14:51:31 +0000 [info]: gem 'fluent-plugin-multi-format-parser' version '1.0.0'
2019-12-16 14:51:31 +0000 [info]: gem 'fluent-plugin-prometheus' version '1.6.1'
2019-12-16 14:51:31 +0000 [info]: gem 'fluent-plugin-record-modifier' version '2.0.1'
2019-12-16 14:51:31 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '2.2.0'
2019-12-16 14:51:31 +0000 [info]: gem 'fluent-plugin-systemd' version '1.0.2'
2019-12-16 14:51:31 +0000 [info]: gem 'fluentd' version '1.7.4'
2019-12-16 14:51:31 +0000 [info]: adding match pattern="fluent.**" type="null"
2019-12-16 14:51:31 +0000 [info]: adding filter pattern="kubernetes.**" type="kubernetes_metadata"
2019-12-16 14:51:31 +0000 [info]: adding match pattern="**" type="elasticsearch"
2019-12-16 14:51:34 +0000 [warn]: #0 [out_es] Could not communicate to Elasticsearch, resetting connection and trying again. SSL_connect returned=1 errno=0 state=error: sslv3 alert handshake failure (OpenSSL::SSL::SSLError)
2019-12-16 14:51:34 +0000 [warn]: #0 [out_es] Remaining retry: 14. Retry to communicate after 2 second(s).
2019-12-16 14:51:38 +0000 [warn]: #0 [out_es] Could not communicate to Elasticsearch, resetting connection and trying again. SSL_connect returned=1 errno=0 state=error: sslv3 alert handshake failure (OpenSSL::SSL::SSLError)
2019-12-16 14:51:38 +0000 [warn]: #0 [out_es] Remaining retry: 13. Retry to communicate after 4 second(s).
I suspect it would work if I could just add the ca_file, client_pem and client_key parameters to the <match> tag, but so far I haven't managed to do that. Any help is appreciated.
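For what it's worth, fluent-plugin-elasticsearch does support client-certificate options named `ca_file`, `client_cert` and `client_key`. Since the stock daemonset image generates its config from env vars, one approach is to mount your own `fluent.conf` (e.g. via a ConfigMap over `/fluentd/etc/`) and point those options at the secret mount. A sketch of the output block, assuming the `/fluent-tls/ssl` mount path and the secret key names used above:

```
<match **>
  @type elasticsearch
  host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
  port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
  scheme https
  ssl_verify true
  ssl_version TLSv1_2
  # Paths follow the volumeMount (/fluent-tls/ssl) and the key names
  # given to kubectl create secret (ca_file, cert_pem, cert_key).
  ca_file /fluent-tls/ssl/ca_file
  client_cert /fluent-tls/ssl/cert_pem
  client_key /fluent-tls/ssl/cert_key
</match>
```

This is a sketch, not a drop-in config: keep whatever buffer and logstash settings your setup already generates.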
Answer 0 (score: 1)
Thank you! Your configuration example helped me solve an SSL authentication problem between FluentD and ES, so perhaps I can help you in return.
I'm running Bitnami's FluentD against Open Distro for Elasticsearch and hit a similar error. My solution was to use part of your configuration, but I had to change host to hosts:
hosts https://admin:admin@odfe-node1:9200
It seems the username, scheme and port all have to be specified on a single line, the same way you would specify multiple hosts in a cluster. It worked for me. See the FluentD docs for further reference.
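For completeness, a multi-host variant of that line would be comma-separated, with scheme, credentials and port repeated per host (node names here are placeholders):

```
hosts https://admin:admin@odfe-node1:9200,https://admin:admin@odfe-node2:9200
```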