I set up an ELK stack to work with log files locally; now I'm trying to add Filebeat, whose output will go to Logstash for filtering before being indexed into Elasticsearch. Here is my filebeat.yml config:
```yaml
prospectors:
  # Each - is a prospector. Below are the prospector specific configurations
  -
    paths:
      - /var/samplelogs/wwwlogs/framework*.log
    input_type: log
    document_type: framework
logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]
logging:
  to_syslog: true
```
Here is the Logstash config:
```
input {
  beats {
    port => 5044
  }
}

filter {
  if [type] == "framework" {
    grok {
      patterns_dir => "/etc/logstash/conf.d/patterns"
      match => {'message' => "\[%{WR_DATE:logtime}\] \[error\] \[app %{WORD:application}\] \[client %{IP:client}\] \[host %{HOSTNAME:host}\] \[uri %{URIPATH:resource}\] %{GREEDYDATA:error_message}"}
    }
    date {
      locale => "en"
      match => [ "logtime", "EEE MMM dd HH:mm:ss yyyy" ]
    }
  }
}

output {
  elasticsearch {
    host => "localhost"
    port => "9200"
    protocol => "http"
    # manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
```
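As a sanity check on the filter, the grok expression above can be approximated with an ordinary regex. Note that WR_DATE is a custom pattern from patterns_dir whose definition isn't shown here; in this sketch it is assumed to match the layout the date filter expects (`EEE MMM dd HH:mm:ss yyyy`), and the sample log line is made up for illustration:

```python
import re

# Rough Python equivalent of the grok match above. WR_DATE (custom pattern,
# definition not shown in the question) is assumed to match a timestamp like
# "Wed Mar 09 12:27:08 2016". The sample line below is hypothetical.
LINE_RE = re.compile(
    r"\[(?P<logtime>\w{3} \w{3} \d{2} \d{2}:\d{2}:\d{2} \d{4})\] "
    r"\[error\] "
    r"\[app (?P<application>\w+)\] "
    r"\[client (?P<client>\d{1,3}(?:\.\d{1,3}){3})\] "
    r"\[host (?P<host>[\w.-]+)\] "
    r"\[uri (?P<resource>/[^\]]*)\] "
    r"(?P<error_message>.*)"
)

sample = ("[Wed Mar 09 12:27:08 2016] [error] [app framework] "
          "[client 10.0.0.5] [host www.example.com] [uri /index.php] "
          "Something went wrong")

m = LINE_RE.match(sample)
if m:
    # → framework 10.0.0.5 /index.php
    print(m.group("application"), m.group("client"), m.group("resource"))
```

If a line like this doesn't match, grok tags the event `_grokparsefailure` rather than dropping it, so a failing pattern would not by itself explain events never reaching the output.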
This Logstash config checks out fine with --configtest. Filebeat starts without problems, but I get the following errors in logstash.log:
```
{:timestamp=>"2016-03-09T12:26:58.976000-0700", :message=>["INFLIGHT_EVENTS_REPORT", "2016-03-09T12:26:58-07:00", {"input_to_filter"=>20, "filter_to_output"=>20, "outputs"=>[]}], :level=>:warn}
{:timestamp=>"2016-03-09T12:27:03.977000-0700", :message=>["INFLIGHT_EVENTS_REPORT", "2016-03-09T12:27:03-07:00", {"input_to_filter"=>20, "filter_to_output"=>20, "outputs"=>[]}], :level=>:warn}
{:timestamp=>"2016-03-09T12:27:08.060000-0700", :message=>"Got error to send bulk of actions: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master];", :level=>:error}
{:timestamp=>"2016-03-09T12:27:08.060000-0700", :message=>"Failed to flush outgoing items", :outgoing_count=>1, :exception=>"Java::OrgElasticsearchClusterBlock::ClusterBlockException", :backtrace=>["org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(org/elasticsearch/cluster/block/ClusterBlocks.java:151)", "org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedRaiseException(org/elasticsearch/cluster/block/ClusterBlocks.java:141)", "org.elasticsearch.action.bulk.TransportBulkAction.executeBulk(org/elasticsearch/action/bulk/TransportBulkAction.java:215)", "org.elasticsearch.action.bulk.TransportBulkAction.access$000(org/elasticsearch/action/bulk/TransportBulkAction.java:67)", "org.elasticsearch.action.bulk.TransportBulkAction$1.onFailure(org/elasticsearch/action/bulk/TransportBulkAction.java:153)", "org.elasticsearch.action.support.TransportAction$ThreadedActionListener$2.run(org/elasticsearch/action/support/TransportAction.java:137)", "java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1142)", "java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:617)", "java.lang.Thread.run(java/lang/Thread.java:745)"], :level=>:warn}
{:timestamp=>"2016-03-09T12:27:08.977000-0700", :message=>["INFLIGHT_EVENTS_REPORT", "2016-03-09T12:27:08-07:00", {"input_to_filter"=>20, "filter_to_output"=>20, "outputs"=>[]}], :level=>:warn}
{:timestamp=>"2016-03-09T12:27:13.977000-0700", :message=>["INFLIGHT_EVENTS_REPORT", "2016-03-09T12:27:13-07:00", {"input_to_filter"=>20, "filter_to_output"=>20, "outputs"=>[]}], :level=>:warn}
```
These errors repeat over and over.
In the Elasticsearch log there is an IllegalArgumentException: empty text.
I tried changing the protocol in the Logstash output config to "node".
It looks as if Elasticsearch can't be reached, but it is running:
```
$ curl localhost:9200
{
  "status" : 200,
  "name" : "Thena",
  "version" : {
    "number" : "1.1.2",
    "build_hash" : "e511f7b28b77c4d99175905fac65bffbf4c80cf7",
    "build_timestamp" : "2014-05-22T12:27:39Z",
    "build_snapshot" : false,
    "lucene_version" : "4.7"
  },
  "tagline" : "You Know, for Search"
}
```
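Worth noting: a 200 from `curl localhost:9200` only proves the node's HTTP port is up. The `SERVICE_UNAVAILABLE ... no master` / `state not recovered` block in logstash.log is a cluster-state problem, which `curl localhost:9200/_cluster/health` would expose. A minimal sketch of that distinction, using the standard fields of a cluster-health response (the sample responses below are hypothetical):

```python
import json

def writes_likely_blocked(health: dict) -> bool:
    """Given a parsed /_cluster/health response, guess whether bulk
    indexing would hit a cluster block. A "red" status (or a health call
    that timed out waiting for the cluster) is consistent with the
    SERVICE_UNAVAILABLE / "no master" errors seen in logstash.log."""
    return health.get("status") == "red" or health.get("timed_out", False)

# Hypothetical responses for illustration:
healthy = json.loads('{"cluster_name": "elasticsearch", "status": "yellow", "timed_out": false}')
broken  = json.loads('{"cluster_name": "elasticsearch", "status": "red",    "timed_out": false}')

print(writes_likely_blocked(healthy))  # False: node up and cluster usable
print(writes_likely_blocked(broken))   # True: matches the "no master" symptom
```

In other words, a node can happily answer on port 9200 while still refusing writes because the cluster never elected a master or recovered its state.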
This is my first attempt at using Logstash. Can anyone point me in the right direction?
Answer 0 (score: 0)
I was able to get my stack working. Everyone's comments were on point, but in this case it came down to a configuration tweak that I still don't fully understand.
In the Logstash output configuration, inside the elasticsearch {} options, I commented out the port and protocol settings (which were set to 9200 and http) and it worked. My first attempted fix was to remove the protocol option so that the node protocol would be used by default. When that didn't work, I removed the port option as well. Since the default for protocol is 'node', it seems I simply couldn't get it working over HTTP and had forgotten to remove the port option; once both were gone, it worked.
This may not help anyone in the future, but if you intend to use the node protocol, make sure you don't forget to remove the port option from your config as well; at least, that's what I believe my problem was.
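For reference, here is a sketch of how the output section described above would look with both options commented out, so the plugin falls back to its defaults. This mirrors the description in this answer; it hasn't been re-verified against every Logstash 1.x release:

```
output {
  elasticsearch {
    host => "localhost"
    # port => "9200"      # removed: the node protocol negotiates its own transport port
    # protocol => "http"  # removed: the default is "node"
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
```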