I'm having a problem with Logstash. I start the process with
bin/logstash -f logstash.conf
and it runs fine, but it doesn't ship any logs from the files until I kill the process with Ctrl+C; only then is the data sent to Elasticsearch.
My question is: why do I have to kill the process before it starts sending all the collected data to Elasticsearch?
logstash.conf:
input {
  file {
    type => logs
    path => "/home/admin/logs/*"
    start_position => beginning
    sincedb_path => "/home/admin/sincedb"
    ignore_older => 0
    codec => multiline {
      pattern => "^[0-2][0-3]:[0-5][0-9].*"
      negate => "true"
      what => "previous"
    }
  }
}
filter {
  grok {
    match => {
      message => "%{NOTSPACE:date}\t+%{INT:done}\t+%{INT:idnumber}\t+SiteID=%{INT:SiteID};DateFrom=%{NOTSPACE:DateFrom};DateTo=%{NOTSPACE:DateTo};RoomCode=%{INT:RoomCode};RatePlanRoomID=%{INT:RatePlanRoomID};DaySetupIDs:%{NOTSPACE:DaySetupIDs};RatePlanID=%{INT:RatePlanID};RatePlanCode=%{INT:RatePlanCode};Calculation=%{WORD:Calculation};IsClosed=%{INT:IsClosed};BaseOccupancy=%{INT:BaseOccupancy};MaxOccupancy=%{INT:MaxOccupancy};MinLOS=%{INT:MinLOS};IsDirtyRate=%{INT:IsDirtyRate};IsDirtyAvail=%{INT:IsDirtyAvail};BasePrice=%{NOTSPACE:BasePrice};ExtraPriceAdult=%{NOTSPACE:ExtraPrice};Currency=%{WORD:Currency};Inventory=%{INT:Inventory};Reservations=%{INT:Reservations};\n+%{GREEDYDATA:Request}\n+%{GREEDYDATA:Response}"
    }
  }
  grok {
    match => {
      path => "%{GREEDYDATA:pp}/%{INT:filedate}_%{INT:fileid}_%{INT:ChannelID}_%{GREEDYDATA:action}_%{INT:isdone}\.bin"
    }
  }
  mutate {
    lowercase => [ "action" ]
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "crs-%{SiteID}"
  }
}
Logstash output:
15:25:10.496 [[main]-pipeline-manager] INFO logstash.pipeline - Pipeline main started
15:25:10.632 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
Then I wait and wait, until I kill the process:
^C15:28:33.530 [SIGINT handler] WARN logstash.runner - SIGINT received. Shutting down the agent.
15:28:33.544 [LogStash::Runner] WARN logstash.agent - stopping pipeline {:id=>"main"}
{
"date" => "11:48:49",
"pp" => "/home/admin/logs",
"BasePrice" => "270.00",
"filedate" => "115151",
"idnumber" => "106274275",
"IsClosed" => "0",
"type" => "logs",
"path" => "/home/admin/logs/115151_00_3_SavePrice_1.bin",
"RatePlanID" => "13078",
"MaxOccupancy" => "0",
"Currency" => "PLN",
"@version" => "1",
"host" => "xxxx",
"Reservations" => "0",
"action" => "saveprice",
"Calculation" => "N",
"BaseOccupancy" => "0",
"isdone" => "1",
"RatePlanCode" => "975669",
"fileid" => "00",
"MinLOS" => "1",
"SiteID" => "1709",
"RatePlanRoomID" => "61840",
"ExtraPrice" => ";ExtraPriceChild=",
"Request" => "<?xml version=\"1.0\"?><request>xxx",
"ChannelID" => "3",
"done" => "1",
"DaySetupIDs" => "0=39355996",
"IsDirtyRate" => "1",
"tags" => [
[0] "multiline"
],
"Response" => "<ok></ok>",
"IsDirtyAvail" => "1",
"@timestamp" => 2016-12-29T15:28:33.801Z,
"RoomCode" => "28114102",
"DateFrom" => "2016-12-05",
"Inventory" => "3",
"DateTo" => "2016-12-05"
}
Answer 0 (score 1):
What if you change sincedb_path as follows?

sincedb_path => "/dev/null"

Apart from that, the attributes of the file plugin in your conf look spot on; hope this comes in handy. As far as I understand from the official documentation, sincedb_path just needs to be a directory where logstash has write permission for its registry. By default, Logstash keeps all of these records in $HOME/.sincedb*.

If you don't want to go with the approach above, you can always clear the .sincedb files from the default directory and then try re-parsing with your logstash conf. Most likely logstash will pick up the change automatically.
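A minimal shell sketch of that second option, assuming the default $HOME/.sincedb* registry location and that Logstash is stopped first:

```shell
# With Logstash stopped, remove the default sincedb registry files so
# the file input re-reads the watched files from the beginning.
rm -f "$HOME"/.sincedb*
```

After clearing the registry, restarting with bin/logstash -f logstash.conf should make the files be parsed again from the start.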
Edit:

I think I found your problem: you missed the quotes around start_position. It should be:

start_position => "beginning"
Answer 1 (score 0):
Adding auto_flush_interval helped me:
codec => multiline {
  auto_flush_interval => 1
  pattern => "^[0-2][0-3]:[0-5][0-9].*"
  negate => "true"
  what => "previous"
}
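This likely explains the symptom in the question: the multiline codec buffers lines for an event until it sees the start of the next event, so the last event in a file is only flushed when the pipeline shuts down, which is why the data appeared after Ctrl+C. auto_flush_interval makes the codec flush a pending event after the given number of seconds of inactivity. Merged into the question's input block (an untested sketch, also quoting "beginning" as the other answer suggests):

```
input {
  file {
    type => logs
    path => "/home/admin/logs/*"
    start_position => "beginning"
    sincedb_path => "/home/admin/sincedb"
    ignore_older => 0
    codec => multiline {
      # flush a buffered event after 1 second without new lines
      auto_flush_interval => 1
      pattern => "^[0-2][0-3]:[0-5][0-9].*"
      negate => "true"
      what => "previous"
    }
  }
}
```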