Updating complex nested Elasticsearch documents with Logstash and JDBC

Date: 2016-01-19 14:16:40

Tags: jdbc elasticsearch logstash

Assume an Oracle schema with the following tables and columns:


    Country
        country_id; (Primary Key)
        country_name;

    Department
        department_id; (Primary Key)
        department_name;
        country_id; (Foreign key to Country:country_id)

    Employee
        employee_id; (Primary Key)
        employee_name;
        department_id; (Foreign key to Department:department_id)

I have an Elasticsearch document whose root element is a country; it contains all of that country's departments, which in turn contain all of each department's employees.

So the document structure looks like this:


    {
      "mappings": {
        "country": {
          "properties": {
            "country_id": { "type": "string"},
            "country_name": { "type": "string"},        
            "department": {
              "type": "nested",
              "properties": {
                "department_id": { "type": "string"},
                "department_name": { "type": "string"},
                "employee": {
                  "type": "nested",
                  "properties": {
                    "employee_id": { "type": "string"},
                    "employee_name": { "type": "string"}
                  }
                }
              }
            }
          }
        }
      }
    }           
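For reference, a single country document indexed under this mapping would look something like the following (the ids and names are invented for illustration):

    {
      "country_id": "1",
      "country_name": "Sweden",
      "department": [
        {
          "department_id": "10",
          "department_name": "Engineering",
          "employee": [
            { "employee_id": "100", "employee_name": "Alice" },
            { "employee_id": "101", "employee_name": "Bob" }
          ]
        }
      ]
    }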

I want to be able to run a separate JDBC input query against each table, and have them create/update/delete data in the Elasticsearch document whenever data in the underlying tables is added/updated/deleted.

This is a sample problem; the actual tables and data structures are more complex, so I am not looking for a solution limited to this example.

Is there a way to achieve this?

Thanks.

1 Answer:

Answer 0 (score: 0)

For the first level, this works directly with the aggregate filter. The rows need a common id to tie them together.
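The rows for this level would typically come from a single jdbc input that joins Country and Department. A minimal sketch, assuming Oracle connection details, driver path, schedule and column aliases that are not part of the original question:

    input {
      jdbc {
        # placeholder connection settings, adjust for your environment
        jdbc_driver_library => "/path/to/ojdbc7.jar"
        jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
        jdbc_connection_string => "jdbc:oracle:thin:@//dbhost:1521/ORCL"
        jdbc_user => "user"
        jdbc_password => "password"
        schedule => "* * * * *"
        # alias country_id to 'id' so it matches the aggregate task_id,
        # and order by it so all rows of one country arrive together
        statement => "SELECT c.country_id AS id, c.country_name,
                             d.department_id, d.department_name
                      FROM Country c
                      JOIN Department d ON d.country_id = c.country_id
                      ORDER BY c.country_id"
      }
    }

Note that the aggregate filter requires a single pipeline worker (-w 1), otherwise rows of the same country may be processed out of order. The filter then folds all department rows of a country into one event: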

    filter {
      aggregate {
        task_id => "%{id}"
        code => "
          map['id'] = event.get('id')
          map['department'] ||= []
          # append the whole row as one department entry
          map['department'] << event.to_hash
        "
        push_previous_map_as_event => true
        timeout => 150000
        timeout_tags => ['aggregated']
      }

      # keep only the aggregated events; drop the raw per-row events
      if "aggregated" not in [tags] {
        drop {}
      }
    }
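Under the assumptions above, the aggregated event for a country would look roughly like this (Logstash metadata such as @timestamp omitted; each department entry carries all row columns, including the country ones, unless you prune them in the code block):

    {
      "id": "1",
      "department": [
        {
          "id": "1",
          "country_name": "Sweden",
          "department_id": "10",
          "department_name": "Engineering"
        }
      ]
    }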
  

Important note: the output action should be update.

    output {
      elasticsearch {
        action => "update"
        ...
      }
    }
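A fuller output might look like the sketch below; the hosts and index name are assumptions, and doc_as_upsert makes the first update create the document instead of failing:

    output {
      elasticsearch {
        hosts => ["localhost:9200"]
        index => "countries"        # assumed index name
        document_id => "%{id}"      # route each event to its country doc
        action => "update"
        doc_as_upsert => true       # create the document if it is missing
      }
    }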

One way to solve level 2 is to query the already-indexed document and update it with the nested records, again using the aggregate filter; the documents should share a common id so that you can look up and insert into the correct document.

    filter {
      # get the document from elasticsearch based on id and copy its
      # 'employee' field into the 'emp' field of the current event
      elasticsearch {
        hosts => ["${ELASTICSEARCH_HOST}/${INDEX_NAME}/${INDEX_TYPE}"]
        query => "id:%{id}"
        fields => { "employee" => "emp" }
      }

      aggregate {
        task_id => "%{id}"
        code => "
          map['id'] = event.get('id')
          map['employee'] = []

          # copy the current row into a plain hash
          temp_emp = {}
          event.to_hash.each do |key, value|
            temp_emp[key] = value
          end
          # drop the looked-up data so it is not nested into itself
          temp_emp.delete('emp')

          # push the row object into an array
          employeeArr = [temp_emp]

          # attach the new rows to each record fetched from elasticsearch
          empArr = event.get('emp')
          empArr.each do |emp|
            emp['employee'] = employeeArr
            map['employee'].push(emp)
          end
        "
        push_previous_map_as_event => true
        timeout => 150000
        timeout_tags => ['aggregated']
      }

      # keep only the aggregated events; drop the raw per-row events
      if "aggregated" not in [tags] {
        drop {}
      }
    }

    output {
      elasticsearch {
        action => "update"    # important
        ...
      }
    }
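The events feeding this second pipeline would come from a jdbc input over the Employee table, again with the join key aliased to id; a sketch under the same placeholder assumptions as before:

    input {
      jdbc {
        # same placeholder connection settings as in the first pipeline
        jdbc_driver_library => "/path/to/ojdbc7.jar"
        jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
        jdbc_connection_string => "jdbc:oracle:thin:@//dbhost:1521/ORCL"
        jdbc_user => "user"
        jdbc_password => "password"
        schedule => "* * * * *"
        # alias country_id to 'id' so the elasticsearch lookup and the
        # aggregate task_id match the indexed country documents
        statement => "SELECT d.country_id AS id, e.employee_id,
                             e.employee_name, e.department_id
                      FROM Employee e
                      JOIN Department d ON d.department_id = e.department_id
                      ORDER BY d.country_id"
      }
    }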
  

Also, to debug the Ruby code, use the following in the output:

    output {
      # prints each full event so you can inspect what the Ruby code produced
      stdout { codec => rubydebug }
    }