Logstash gets an Elasticsearch unreachable error

Time: 2018-11-02 15:33:01

Tags: elasticsearch logstash

I am trying to connect Logstash with Elasticsearch but I can't get it to work. My Elasticsearch runs fine on localhost:9200 and I can curl it.

My logstash.config looks like this:

http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.url: http://localhost:9200

My logstash-sample.conf:

# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}

The command I use to start the container:

docker run \
  --name logstash \
  -p 5044:5044 \
  -e "discovery.type=single-node" \
  docker.elastic.co/logstash/logstash:6.4.2

But I get the following logs with errors:

Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2018-11-02T15:30:44,622][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
[2018-11-02T15:30:44,814][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.4.2"}
[2018-11-02T15:30:48,350][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch bulk_path=>"/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s", hosts=>[http://localhost:9200], sniffing=>false, manage_template=>false, id=>"2196aa69258f6adaaf9506d8988cc76ab153e658434074dcf2e424e0aca0d381", document_type=>"%{[@metadata][document_type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_1afa70a3-eaef-4cf5-9762-0d759d720d1c", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2018-11-02T15:30:48,710][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50}
[2018-11-02T15:30:50,318][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-11-02T15:30:51,394][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2018-11-02T15:30:51,462][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x688d6788 run>"}
[2018-11-02T15:30:51,698][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2018-11-02T15:30:51,972][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-11-02T15:30:51,982][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-11-02T15:30:52,194][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-11-02T15:30:52,218][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[2018-11-02T15:30:52,322][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-11-02T15:30:52,322][INFO ][logstash.licensechecker.licensereader] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-11-02T15:30:52,332][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-11-02T15:30:52,378][WARN ][logstash.licensechecker.licensereader] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused) {:url=>http://localhost:9200/, :error_message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2018-11-02T15:30:52,509][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:293:in `perform_request_to_url'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:278:in `block in perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:373:in `with_connection'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:277:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:285:in `block in get'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:162:in `get'", "/usr/share/logstash/x-pack/lib/license_checker/license_reader.rb:28:in `fetch_xpack_info'", "/usr/share/logstash/x-pack/lib/license_checker/license_manager.rb:40:in `fetch_xpack_info'", "/usr/share/logstash/x-pack/lib/license_checker/license_manager.rb:27:in `initialize'", "/usr/share/logstash/x-pack/lib/license_checker/licensed.rb:37:in `setup_license_checker'", "/usr/share/logstash/x-pack/lib/monitoring/inputs/metrics.rb:56:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:242:in `register_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:253:in `block in register_plugins'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:253:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:396:in `start_inputs'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:294:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:200:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:160:in `block in start'"]}
[2018-11-02T15:30:52,776][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>".monitoring-logstash", :thread=>"#<Thread:0x4bac2ad3 sleep>"}
[2018-11-02T15:30:52,842][INFO ][logstash.agent           ] Pipelines running {:count=>2, :running_pipelines=>[:main, :".monitoring-logstash"], :non_running_pipelines=>[]}
[2018-11-02T15:30:52,865][ERROR][logstash.inputs.metrics  ] X-Pack is installed on Logstash but not on Elasticsearch. Please install X-Pack on Elasticsearch to use the monitoring feature. Other features may be available.
[2018-11-02T15:30:53,119][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2018-11-02T15:30:57,213][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-11-02T15:30:57,222][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-11-02T15:30:57,347][INFO ][logstash.licensechecker.licensereader] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}

And it keeps going like this. I would really appreciate any help. Thanks.

2 Answers:

Answer 0 (score: 1)

As @Val pointed out in the comments, localhost inside the container is the container itself, not your HOST machine.
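For instance, if Elasticsearch runs directly on the Docker host, you can point Logstash at the host rather than at "localhost". A minimal sketch, assuming Docker Desktop on Mac or Windows, where the special name host.docker.internal resolves to the host (on Linux, substitute your host's LAN IP):

# logstash.yml
xpack.monitoring.elasticsearch.url: http://host.docker.internal:9200

# logstash-sample.conf (output section)
output {
  elasticsearch {
    hosts => ["http://host.docker.internal:9200"]
  }
}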

If Docker Compose works for you, you can try the 'sebp/elk' Docker image, which bundles Elasticsearch, Logstash, and Kibana, with this sample compose file:

docker-compose.yml

version: "2.4"
services:
  elk:
    image: sebp/elk
    ports:
      - "5601:5601"
      - "9200:9200"
      - "5044:5044"

  1. Install Docker Compose
  2. Save the sample 'docker-compose.yml' in a folder
  3. Run in that same folder: $ docker-compose up
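Once the stack is up you can sanity-check it from the host (a quick sketch, assuming the default ports published in the compose file above):

$ curl http://localhost:9200      # Elasticsearch should answer with its cluster/version JSON
$ curl http://localhost:5601      # Kibana UI (or open it in a browser)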

However, if you want to change the configuration, you should either:

  1. Create your own Dockerfile based on that image and overwrite the conf files you need, or
  2. Use Docker Compose volumes to overwrite individual files by editing docker-compose.yml, instead of building a new Docker image with a Dockerfile (see the sketch below)
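A minimal sketch of option 2, assuming the sebp/elk image loads its Logstash pipeline files from /etc/logstash/conf.d (the target path and file name here are assumptions, so check the image's documentation; my-logstash.conf is a hypothetical local file):

version: "2.4"
services:
  elk:
    image: sebp/elk
    ports:
      - "5601:5601"
      - "9200:9200"
      - "5044:5044"
    volumes:
      # overwrite a single pipeline file inside the image with a local one
      - ./my-logstash.conf:/etc/logstash/conf.d/02-beats-input.conf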

Alternatives:

  1. Use your private local LAN IP in the container configuration instead of "localhost", so the container can reach it.
  2. Containerize both systems, run them on the same network with "docker run", set hostnames, and use those hostnames in the configuration (see the sketch after this list).
  3. Use one Docker Compose service per system (Elasticsearch and Logstash) and reference the service names in the configuration (Docker Compose uses service names as hostnames by default).
  4. The original suggestion: Docker Compose with a single service based on the sebp/elk Docker image, so everything runs in the same container and the localhost conf can stay as it is.
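A minimal sketch of alternative 2 with the official 6.4.2 images (the network name "elastic" and the container names are arbitrary choices):

docker network create elastic

docker run -d --name elasticsearch --network elastic \
  -p 9200:9200 -e "discovery.type=single-node" \
  docker.elastic.co/elasticsearch/elasticsearch:6.4.2

docker run -d --name logstash --network elastic \
  -p 5044:5044 \
  docker.elastic.co/logstash/logstash:6.4.2

With both containers on the same network, the Logstash config can then reference the Elasticsearch container by name, e.g. hosts => ["http://elasticsearch:9200"] and xpack.monitoring.elasticsearch.url: http://elasticsearch:9200.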

Keep in mind that each alternative may require a different way of customizing the configuration.

Answer 1 (score: 0)

I think the clue to the problem is this error message:

[2018-11-02T15:30:52,865][ERROR][logstash.inputs.metrics  ] X-Pack is installed on Logstash but not on Elasticsearch. Please install X-Pack on Elasticsearch to use the monitoring feature. Other features may be available.

It looks like you have X-Pack installed on Logstash but not on your Elasticsearch node. Try installing it there and re-run Logstash.
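To verify, you can query the X-Pack info endpoint on the Elasticsearch node (a sketch; _xpack only responds when X-Pack is present, and on Elasticsearch versions before 6.3 X-Pack is a separate plugin rather than being bundled):

# should return X-Pack build and feature info if it is installed
curl http://localhost:9200/_xpack

# on Elasticsearch < 6.3, X-Pack can be added as a plugin:
bin/elasticsearch-plugin install x-pack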