Logstash: filter by field value and store in ES indices

Time: 2017-06-02 06:40:51

Tags: shell elasticsearch logstash kibana

I am new to ES and Logstash, and I am trying to import JSON log-file data into an ES index so that I can then create a dashboard with Kibana.

Here I have a large file of around 44 GB, and it keeps growing daily.

Q1: Can we load such a large file into Elasticsearch?

Below is sample data from the log file.

    Jun 1 17:12:18 10.10.125.148 2017-06-01T11:42:28Z 352019b8-0d2d-4397-446a-98fabeddf3bf doppler[19]: {
    "cf_app_id": "a4d311b3-f756-4d5e-bc3d-03690d461443",
    "cf_app_name": "parkingapp",
    "cf_ignored_app": false,
    "cf_org_id": "c5803a97-d696-497e-a0a4-112117eefab1",
    "cf_org_name": "KPIT",
    "cf_origin": "firehose",
    "cf_space_id": "886f2158-6b8a-4079-a6e1-7aa52034400d",
    "cf_space_name": "Development",
    "cpu_percentage": 0.022689683212221975,
    "deployment": "cf",
    "disk_bytes": 86257664,
    "disk_bytes_quota": 1073741824,
    "event_type": "ContainerMetric",
    "instance_index": 0,
    "ip": "10.10.125.113",
    "job": "diego_cell",
    "job_index": "356614b9-b079-4cc7-bcf9-4f61ab7924d0",
    "level": "info",
    "memory_bytes": 89395200,
    "memory_bytes_quota": 536870912,
    "msg": "",
    "origin": "rep",
    "time": "2017-06-01T11:42:28Z"
}
    Jun 1 17:12:18 10.10.125.148 2017-06-01T11:42:28Z 352019b8-0d2d-4397-446a-98fabeddf3bf doppler[19]: {
    "cf_app_id": "3a83fdf4-a69a-45ca-8537-f7916c79dbbb",
    "cf_app_name": "spring-cloud-broker",
    "cf_ignored_app": false,
    "cf_org_id": "13233503-5430-4372-942c-02147ac34c38",
    "cf_org_name": "system",
    "cf_origin": "firehose",
    "cf_space_id": "1f40ca9a-ca34-434b-aa17-82ed87657a6e",
    "cf_space_name": "p-spring-cloud-services",
    "cpu_percentage": 0.0955028907326772,
    "deployment": "cf",
    "disk_bytes": 188231680,
    "disk_bytes_quota": 1073741824,
    "event_type": "ContainerMetric",
    "instance_index": 0,
    "ip": "10.10.125.113",
    "job": "diego_cell",
    "job_index": "356614b9-b079-4cc7-bcf9-4f61ab7924d0",
    "level": "info",
    "memory_bytes": 641343488,
    "memory_bytes_quota": 1073741824,
    "msg": "",
    "origin": "rep",
    "time": "2017-06-01T11:42:28Z"
}
    Jun 1 17:12:18 10.10.125.148 2017-06-01T11:42:28Z 352019b8-0d2d-4397-446a-98fabeddf3bf doppler[19]: {
    "cf_app_id": "37acc229-844a-4ed3-ab54-5149ffab5b5b",
    "cf_app_name": "apps-manager-js",
    "cf_ignored_app": false,
    "cf_org_id": "13233503-5430-4372-942c-02147ac34c38",
    "cf_org_name": "system",
    "cf_origin": "firehose",
    "cf_space_id": "0ba61523-6a76-4d37-a0cd-a0117454a6eb",
    "cf_space_name": "system",
    "cpu_percentage": 0.04955433122879798,
    "deployment": "cf",
    "disk_bytes": 10235904,
    "disk_bytes_quota": 107374182 4,
    "event_type": "ContainerMetric",
    "instance_index": 5,
    "ip": "10.10.125.113",
    "job": "diego_cell",
    "job_index": "356614b9-b079-4cc7-bcf9-4f61ab7924d0",
    "level": "info",
    "memory_bytes": 6307840,
    "memory_bytes_quota": 67108864,
    "msg": "",
    "origin": "rep",
    "time": "2017-06-01T11:42:28Z"
}

As you can see, the logs are not in pure JSON format.

Here the logs contain cf_app_name. With the command below I pull the unique cf_app_name values out of the logs and store the output in another file.

    grep -Po '"cf_app_name":.*?[^\\]"' /var/log/messages | cut -d ':' -f2 | sort -u > applications.txt
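With the three sample entries above, applications.txt should end up containing one quoted name per application, for example:

    "apps-manager-js"
    "parkingapp"
    "spring-cloud-broker"

Note that cut keeps the surrounding quotes, which is why the script below splits on them.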

Then I create an index in Elasticsearch for each cf_app_name by reading the applications.txt file with the script below.

    tr 'A-Z' 'a-z' < applications.txt > apps_name.txt

    while IFS='"' read -ra arr; do
        for i in "${arr[@]}"; do
            name="$i"
            # Splitting on '"' also yields empty fields; skip them so we
            # never PUT an index with an empty name.
            [ -z "$name" ] && continue
            CURL_COMMAND=$(curl -XPUT "localhost:9200/${name}?pretty")
            echo "$CURL_COMMAND"
        done
    done < /root/apps_name.txt
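Each successful PUT prints an acknowledgement similar to the following (the exact fields vary with the Elasticsearch version):

    {
      "acknowledged" : true,
      "shards_acknowledged" : true
    }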

With this I successfully created thousands of indices in Elasticsearch.

Now what I want to do is load these logs into the Elasticsearch indices according to cf_app_name.

That means every log entry should be stored in its respective index based on its cf_app_name.

Q2: Is Logstash the best solution for this? If so, please share your valuable suggestions on how to achieve it.
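To make the goal concrete, below is a rough sketch of the kind of Logstash pipeline I have in mind. It is an untested sketch: the multiline handling, the grok pattern, and the field name json_payload are my assumptions based on the sample data above, while /var/log/messages and localhost:9200 come from my setup.

    input {
        file {
            path => "/var/log/messages"
            start_position => "beginning"
            # Events span several lines; any line that does not start with
            # a syslog timestamp belongs to the previous event.
            codec => multiline {
                pattern => "^%{SYSLOGTIMESTAMP} "
                negate => true
                what => "previous"
            }
        }
    }

    filter {
        # Strip the syslog-style prefix and capture the JSON payload.
        # (?m) lets GREEDYDATA match across the newlines of the pretty-printed JSON.
        grok {
            match => { "message" => "(?m)^%{SYSLOGTIMESTAMP} %{IP:source_host} %{TIMESTAMP_ISO8601:event_time} %{UUID:origin_id} doppler\[%{POSINT}\]: %{GREEDYDATA:json_payload}" }
        }
        # Parse the payload so that cf_app_name etc. become real fields.
        json {
            source => "json_payload"
        }
        # Elasticsearch index names must be lowercase.
        mutate {
            lowercase => [ "cf_app_name" ]
        }
    }

    output {
        elasticsearch {
            hosts => ["localhost:9200"]
            # Route every event to the index named after its application.
            index => "%{cf_app_name}"
        }
    }

If something like this works, a nice side effect is that Elasticsearch creates missing indices automatically on first write, so pre-creating thousands of indices by hand may not even be necessary. I am also unsure whether one index per application scales better than a single index with cf_app_name as an ordinary filterable field.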

Thank you, Rabbit.

0 Answers:

No answers yet.