Logstash filter - aggregate duplicate events and merge them

Date: 2016-03-01 00:32:22

Tags: ruby logstash

Given these three Logstash events, each a Ruby hash:

{
    "id" => "1",
    "address_unit" => "1",
    "address_sqft" => 1098,
    "address_city" => "NEW YORK"
}

{
    "id" => "1",
    "address_unit" => "1",
    "address_sqft" => 1090,
    "address_city" => "NEW YORK"
}


{
    "id" => "1",
    "address_unit" => "1",
    "address_sqft" => 2000,
    "address_city" => "NEW YORK"
}

they should be stored in Elasticsearch (ES) as:

{
    "id" => "1",
    "address_unit" => "1",
    "address_sqft" => [1098, 1090, 2000],
    "address_city" => "NEW YORK"
}

I'd like to know whether there is a built-in/community filter that does this, or how to implement it with Ruby code.

Thanks!

1 answer:

Answer 0 (score: 1)

If your hashes look like a and b below, you can use:

[15] pry(main)> a
=> {"id"=>"1", "address_unit"=>"1", "address_sqft"=>1098, "address_city"=>"NEW YORK"}
[16] pry(main)> b
=> {"id"=>"1", "address_unit"=>"1", "address_sqft"=>1090, "address_city"=>"NEW YORK"}
[17] pry(main)> a.merge(b) {|_, old_val, new_val| old_val==new_val ? old_val : ([old_val] << new_val).flatten}
=> {"id"=>"1", "address_unit"=>"1", "address_sqft"=>[1098, 1090], "address_city"=>"NEW YORK"}
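To handle all three events from the question rather than just two, the same merge block can be folded over a list of hashes with `reduce`. The sketch below is an assumption about how you might wire this up outside of pry (the `events` array simply restates the three example events); using `Array(old_val)` instead of `[old_val].flatten` keeps values that are already arrays from being re-wrapped:

```ruby
# Three duplicate events from the question, as Ruby hashes.
events = [
  { "id" => "1", "address_unit" => "1", "address_sqft" => 1098, "address_city" => "NEW YORK" },
  { "id" => "1", "address_unit" => "1", "address_sqft" => 1090, "address_city" => "NEW YORK" },
  { "id" => "1", "address_unit" => "1", "address_sqft" => 2000, "address_city" => "NEW YORK" }
]

# Pairwise-merge the whole list: identical values are kept as-is,
# differing values are accumulated into an array.
merged = events.reduce do |acc, event|
  acc.merge(event) do |_key, old_val, new_val|
    old_val == new_val ? old_val : (Array(old_val) << new_val)
  end
end

puts merged.inspect
# => {"id"=>"1", "address_unit"=>"1", "address_sqft"=>[1098, 1090, 2000], "address_city"=>"NEW YORK"}
```

Inside a Logstash `ruby` filter you would apply the same merge block to the incoming event's fields, but buffering events that share an `id` until the group is complete is something Logstash does not do for you out of the box, so that part is left as an exercise here.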