I'm running this implementation of the ELK stack, which is fairly straightforward and easy to configure.
I can push TCP input into the stack with netcat, like so:
nc localhost 5000 < /Users/me/path/to/logs/appOne.log
nc localhost 5000 < /Users/me/path/to/logs/appOneStackTrace.log
nc localhost 5000 < /Users/me/path/to/logs/appTwo.log
nc localhost 5000 < /Users/me/path/to/logs/appTwoStackTrace.log
However, I can't get Logstash to read the file paths I've specified in my config:

input {
  tcp {
    port => 5000
  }

  file {
    path => [
      "/Users/me/path/to/logs/appOne.log",
      "/Users/me/path/to/logs/appOneStackTrace.log",
      "/Users/me/path/to/logs/appTwo.log",
      "/Users/me/path/to/logs/appTwoStackTrace.log"
    ]
    type => "log"
    start_position => "beginning"
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}
Here is the startup output from the stack for the Logstash inputs:
logstash_1 | [2019-01-28T17:44:33,206][INFO ][logstash.inputs.tcp ] Starting tcp input listener {:address=>"0.0.0.0:5000", :ssl_enable=>"false"}
logstash_1 | [2019-01-28T17:44:34,037][INFO ][logstash.inputs.file ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_a1605b28f1bc77daf785a8805c32f578", :path=>["/Users/me/path/to/logs/appOne.log", "/Users/me/path/to/logs/appOneStackTrace.log", "/Users/me/path/to/logs/appTwo.log", "/Users/me/path/to/logs/appTwoStackTrace.log"]}
There is no indication that the pipeline is having any trouble starting.
I've also checked that the log files have been updated since the TCP input showed up, and they have been. The last Logstash-specific logs in the ELK stack come either from startup or from the TCP input.
Here is my entire Logstash startup log, in case it helps:
logstash_1 | Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
logstash_1 | [2019-01-29T13:32:19,391][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
logstash_1 | [2019-01-29T13:32:19,415][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.5.4"}
logstash_1 | [2019-01-29T13:32:23,989][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
logstash_1 | [2019-01-29T13:32:24,648][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash_1 | [2019-01-29T13:32:24,908][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
logstash_1 | [2019-01-29T13:32:25,046][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
logstash_1 | [2019-01-29T13:32:25,051][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
logstash_1 | [2019-01-29T13:32:25,108][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
logstash_1 | [2019-01-29T13:32:25,229][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
logstash_1 | [2019-01-29T13:32:25,276][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
logstash_1 | [2019-01-29T13:32:25,327][INFO ][logstash.inputs.tcp ] Starting tcp input listener {:address=>"0.0.0.0:5000", :ssl_enable=>"false"}
logstash_1 | [2019-01-29T13:32:25,924][INFO ][logstash.inputs.file ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_143c07d174c46eeab78b902edb3b1289", :path=>["/Users/me/path/to/logs/appOne.log", "/Users/me/path/to/logs/appOneStackTrace.log", "/Users/me/path/to/logs/appTwo.log", "/Users/me/path/to/logs/appTwoStackTrace.log"]}
logstash_1 | [2019-01-29T13:32:25,976][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x4d1515ce run>"}
logstash_1 | [2019-01-29T13:32:26,088][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
logstash_1 | [2019-01-29T13:32:26,106][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
logstash_1 | [2019-01-29T13:32:26,432][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
Answer 0 (score: 0)
I found the problem: I needed to map the log files from the host into the container (Docker noob here). The local paths I specified in the Logstash config were fine for the TCP input, but they weren't available inside the container without a volume mapping.
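One way to confirm this kind of issue (a hypothetical check, not part of the original troubleshooting; the container name is an assumption based on docker-elk's defaults) is to look for the host path from inside the running container:

# Hypothetical check: before adding a volume mapping, the host path
# should not exist inside the Logstash container (container name assumed).
docker exec -it docker-elk_logstash_1 ls /Users/me/path/to/logs
# expected: ls: cannot access '/Users/me/path/to/logs': No such file or directory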
First, I created the container-internal log directories in the Dockerfile for Logstash:
RUN mkdir /usr/share/appOneLogs
RUN mkdir /usr/share/appTwoLogs
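For context, a minimal sketch of what the full logstash/Dockerfile might look like with those lines in place; the ARG and FROM lines are assumptions based on the docker-elk layout, not taken from the original:

# Sketch only: the base image is an assumption, with the version supplied
# via the ELK_VERSION build arg from docker-compose.yml
ARG ELK_VERSION
FROM docker.elastic.co/logstash/logstash-oss:${ELK_VERSION}

# Directories the host log folders will be mounted onto
RUN mkdir /usr/share/appOneLogs
RUN mkdir /usr/share/appTwoLogs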
Then I volume-mapped the host log directories into them in the docker-elk/docker-compose.yml file where Logstash is configured:
logstash:
  build:
    context: logstash/
    args:
      ELK_VERSION: $ELK_VERSION
  volumes:
    - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
    - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    - /Users/me/path/to/appOne/logs:/usr/share/appOneLogs # this bit
    - /Users/me/path/to/appTwo/logs:/usr/share/appTwoLogs # and this bit
  ports:
    - "5000:5000"
  ...
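Volume changes only take effect when the container is recreated, so after editing docker-compose.yml the Logstash service has to be rebuilt and restarted, for example (the exact command is a suggestion, not from the original answer):

# Rebuild the Logstash image (the Dockerfile changed) and recreate the container
docker-compose up -d --build logstash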
Finally, I replaced the paths in logstash/pipelines/logstash.config with the directories created in the Dockerfile:
file {
  path => [
    "/usr/share/appOneLogs",
    "/usr/share/appTwoLogs",
  ]
}
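If pointing the file input at bare directories doesn't pick anything up, the path option also accepts glob patterns, so a variant like the following sketch (the *.log filenames are an assumption) may be what's needed:

file {
  path => [
    "/usr/share/appOneLogs/*.log",   # glob: match every .log file in the mounted directory
    "/usr/share/appTwoLogs/*.log"
  ]
}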
Also note that I removed start_position => "beginning" from the file input definition, since it overrides the default behavior of treating files like live streams and thus starting at the end.
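If you do want existing file contents to be ingested rather than only new lines, that setting can be kept; here is a minimal sketch, where sincedb_path => "/dev/null" is an optional extra commonly used during testing so files are re-read on every restart (not part of the original answer):

file {
  path => ["/usr/share/appOneLogs/*.log"]
  start_position => "beginning"  # read existing content instead of tailing from the end
  sincedb_path => "/dev/null"    # optional, testing only: don't remember read positions
}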