Filebeat always gets i/o timeout when connecting to Logstash

Time: 2017-05-11 05:58:54

Tags: elasticsearch logstash elastic-stack filebeat

Filebeat was working fine before I changed the elasticsearch password. By the way, I use docker-compose to start the services; here is some information about my Filebeat setup. Console log:

filebeat    | 2017/05/11 05:21:33.020851 beat.go:285: INFO Home path: [/] Config path: [/] Data path: [//data] Logs path: [//logs]
filebeat    | 2017/05/11 05:21:33.020903 beat.go:186: INFO Setup Beat: filebeat; Version: 5.3.0
filebeat    | 2017/05/11 05:21:33.021019 logstash.go:90: INFO Max Retries set to: 3
filebeat    | 2017/05/11 05:21:33.021097 outputs.go:108: INFO Activated logstash as output plugin.
filebeat    | 2017/05/11 05:21:33.021908 publish.go:295: INFO Publisher name: fd2f326e51d9
filebeat    | 2017/05/11 05:21:33.022092 async.go:63: INFO Flush Interval set to: 1s
filebeat    | 2017/05/11 05:21:33.022104 async.go:64: INFO Max Bulk Size set to: 2048
filebeat    | 2017/05/11 05:21:33.022220 modules.go:93: ERR Not loading modules. Module directory not found: /module
filebeat    | 2017/05/11 05:21:33.022291 beat.go:221: INFO filebeat start running.
filebeat    | 2017/05/11 05:21:33.022334 registrar.go:68: INFO No registry file found under: /data/registry. Creating a new registry file.
filebeat    | 2017/05/11 05:21:33.022570 metrics.go:23: INFO Metrics logging every 30s
filebeat    | 2017/05/11 05:21:33.025878 registrar.go:106: INFO Loading registrar data from /data/registry
filebeat    | 2017/05/11 05:21:33.025918 registrar.go:123: INFO States Loaded from registrar: 0
filebeat    | 2017/05/11 05:21:33.025970 crawler.go:38: INFO Loading Prospectors: 1
filebeat    | 2017/05/11 05:21:33.026119 prospector_log.go:61: INFO Prospector with previous states loaded: 0
filebeat    | 2017/05/11 05:21:33.026278 prospector.go:124: INFO Starting prospector of type: log; id: 5816422928785612348 
filebeat    | 2017/05/11 05:21:33.026299 crawler.go:58: INFO Loading and starting Prospectors completed. Enabled prospectors: 1
filebeat    | 2017/05/11 05:21:33.026323 registrar.go:236: INFO Starting Registrar
filebeat    | 2017/05/11 05:21:33.026364 sync.go:41: INFO Start sending events to output
filebeat    | 2017/05/11 05:21:33.026394 spooler.go:63: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
filebeat    | 2017/05/11 05:21:33.026731 log.go:91: INFO Harvester started for file: /data/logs/biz.log
filebeat    | 2017/05/11 05:22:03.023313 metrics.go:39: INFO Non-zero metrics in the last 30s: filebeat.harvester.open_files=1 filebeat.harvester.running=1 filebeat.harvester.started=1 libbeat.publisher.published_events=98 registrar.writes=1
filebeat    | 2017/05/11 05:22:08.028292 single.go:140: ERR Connecting error publishing events (retrying): dial tcp 47.93.121.126:5044: i/o timeout
filebeat    | 2017/05/11 05:22:33.023370 metrics.go:34: INFO No non-zero metrics in the last 30s
filebeat    | 2017/05/11 05:22:39.028840 single.go:140: ERR Connecting error publishing events (retrying): dial tcp 47.93.121.126:5044: i/o timeout
filebeat    | 2017/05/11 05:23:03.022906 metrics.go:34: INFO No non-zero metrics in the last 30s
filebeat    | 2017/05/11 05:23:11.029517 single.go:140: ERR Connecting error publishing events (retrying): dial tcp 47.93.121.126:5044: i/o timeout
filebeat    | 2017/05/11 05:23:33.023450 metrics.go:34: INFO No non-zero metrics in the last 30s
filebeat    | 2017/05/11 05:23:45.030202 single.go:140: ERR Connecting error publishing events (retrying): dial tcp 47.93.121.126:5044: i/o timeout
filebeat    | 2017/05/11 05:24:03.022864 metrics.go:34: INFO No non-zero metrics in the last 30s
filebeat    | 2017/05/11 05:24:23.030749 single.go:140: ERR Connecting error publishing events (retrying): dial tcp 47.93.121.126:5044: i/o timeout
filebeat    | 2017/05/11 05:24:33.024029 metrics.go:34: INFO No non-zero metrics in the last 30s
filebeat    | 2017/05/11 05:25:03.023338 metrics.go:34: INFO No non-zero metrics in the last 30s
filebeat    | 2017/05/11 05:25:09.031348 single.go:140: ERR Connecting error publishing events (retrying): dial tcp 47.93.121.126:5044: i/o timeout
filebeat    | 2017/05/11 05:25:33.023976 metrics.go:34: INFO No non-zero metrics in the last 30s
filebeat    | 2017/05/11 05:26:03.022900 metrics.go:34: INFO No non-zero metrics in the last 30s
filebeat    | 2017/05/11 05:26:11.032346 single.go:140: ERR Connecting error publishing events (retrying): dial tcp 47.93.121.126:5044: i/o timeout
filebeat    | 2017/05/11 05:26:33.022870 metrics.go:34: INFO No non-zero metrics in the last 30s

filebeat.yml:

filebeat:
  prospectors:
    -
      paths:
        - /data/logs/*.log
      input_type: log
      document_type: biz-log
  registry_file: /etc/registry/mark
output:
  logstash:
    enabled: true
    hosts: ["logstash:5044"]

docker-compose.yml:

version: '2'
services:
  filebeat:
    build: ./
    container_name: filebeat
    restart: always
    network_mode: "bridge"
    extra_hosts:
      - "logstash:47.93.121.126"
    volumes:
      - ./conf/filebeat.yml:/filebeat.yml
      - /mnt/logs/appserver/app/biz:/data/logs
      - ./registry:/data

2 answers:

Answer 0 (score: 0):

The registry file stores the state and position information that Filebeat uses to keep track of the last read location for each file. So you can try renaming or deleting the registry file and restarting Filebeat:

cd /var/lib/filebeat
sudo mv registry registry.bak
sudo service filebeat restart
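
In the docker-compose setup from the question, the registry is not under /var/lib/filebeat: the startup log shows it being created at /data/registry inside the container, and /data is bind-mounted from ./registry on the host, so the equivalent there would be to stop the container, move ./registry/registry aside, and start it again. If you prefer the location to be explicit rather than derived from the data path, a minimal sketch (using the same nested 5.x config style as the question's filebeat.yml) would be:

filebeat:
  # keep the registry on the volume that the compose file bind-mounts
  # from ./registry on the host, so it can be inspected or removed from there
  registry_file: /data/registry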

Answer 1 (score: 0):

I ran into a similar problem and eventually realized the culprit was not Filebeat but Logstash.

Logstash's SSL configuration did not include all of the required attributes. Setting it up with the following declaration solved the problem:

input {
    beats {
        port => "{{ logstash_port }}"
        ssl => true
        ssl_certificate_authorities => [ "{{ tls_certificate_authority_file }}" ]
        ssl_certificate => "{{ tls_certificate_file }}"
        ssl_key => "{{ tls_certificate_key_file }}"
        ssl_verify_mode => "force_peer"
    }
}

The example above is templated for Ansible, so remember to replace the placeholders between {{ }} with the correct values.
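
Because the input above sets ssl_verify_mode => "force_peer", Logstash will also reject Beats clients that do not present a client certificate. As a minimal sketch of the matching Filebeat side (not part of the original answer; the certificate paths are placeholders you need to replace with your own files), the output section of a 5.x filebeat.yml would look roughly like this:

output:
  logstash:
    enabled: true
    hosts: ["logstash:5044"]
    ssl:
      # CA certificate used to verify the Logstash server certificate
      certificate_authorities: ["/path/to/ca.crt"]
      # client certificate and key, required because of force_peer on the Logstash side
      certificate: "/path/to/filebeat.crt"
      key: "/path/to/filebeat.key"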