Oracle GoldenGate Big Data Kafka adapter: grouping data into Kafka

Asked: 2017-10-31 12:05:14

Tags: apache-kafka grouping adapter oracle-golden-gate

Source: Oracle Database. Target: Kafka.

I am moving data from source to target with the Oracle GoldenGate Big Data adapter. The data movement itself works, but when I insert 5 records they arrive as a single message in the topic.

I want the data split per operation: if I do 5 inserts, I need five separate entries in the Kafka topic.

Kafka handler, GoldenGate for Big Data version 12.3.1.

I inserted five records at the source, and in Kafka I am getting all the operations together, as below:

{"table":"MYSCHEMATOPIC.ELASTIC_TEST","op_type":"I","op_ts":"2017-10-24 08:52:01.000000","current_ts":"2017-10-24T12:52:04.960000","pos":"00000000030000001263","after":{"TEST_ID":2,"TEST_NAME":"Francis","TEST_NAME_AR":"Francis"}}
{"table":"MYSCHEMATOPIC.ELASTIC_TEST","op_type":"I","op_ts":"2017-10-24 08:52:01.000000","current_ts":"2017-10-24T12:52:04.961000","pos":"00000000030000001437","after":{"TEST_ID":3,"TEST_NAME":"Ashfak","TEST_NAME_AR":"Ashfak"}}
{"table":"MYSCHEMATOPIC.ELASTIC_TEST","op_type":"U","op_ts":"2017-10-24 08:55:04.000000","current_ts":"2017-10-24T12:55:07.252000","pos":"00000000030000001734","before":{"TEST_ID":null,"TEST_NAME":"Francis"},"after":{"TEST_ID":null,"TEST_NAME":"updatefrancis"}}
{"table":"MYSCHEMATOPIC.ELASTIC_TEST","op_type":"D","op_ts":"2017-10-24 08:56:11.000000","current_ts":"2017-10-24T12:56:14.365000","pos":"00000000030000001865","before":{"TEST_ID":2}}
{"table":"MYSCHEMATOPIC.ELASTIC_TEST","op_type":"U","op_ts":"2017-10-24 08:57:43.000000","current_ts":"2017-10-24T12:57:45.817000","pos":"00000000030000002152","before":{"TEST_ID":3},"after":{"TEST_ID":4}}
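Note that even when they land in one message, the payload above is newline-delimited JSON with one document per operation, so a consumer can still split it. A minimal sketch (field set abridged to what is used here; this is illustrative parsing, not part of GoldenGate itself):

```python
import json
from collections import Counter

# The operation records from the question, abridged to the fields used below.
raw = """\
{"table":"MYSCHEMATOPIC.ELASTIC_TEST","op_type":"I","after":{"TEST_ID":2,"TEST_NAME":"Francis"}}
{"table":"MYSCHEMATOPIC.ELASTIC_TEST","op_type":"I","after":{"TEST_ID":3,"TEST_NAME":"Ashfak"}}
{"table":"MYSCHEMATOPIC.ELASTIC_TEST","op_type":"U","before":{"TEST_ID":null},"after":{"TEST_ID":null,"TEST_NAME":"updatefrancis"}}
{"table":"MYSCHEMATOPIC.ELASTIC_TEST","op_type":"D","before":{"TEST_ID":2}}
{"table":"MYSCHEMATOPIC.ELASTIC_TEST","op_type":"U","before":{"TEST_ID":3},"after":{"TEST_ID":4}}
"""

# Split the single payload into one JSON document per operation.
ops = [json.loads(line) for line in raw.splitlines()]

# Tally operation types: I = insert, U = update, D = delete.
counts = Counter(op["op_type"] for op in ops)
print(counts)  # Counter({'I': 2, 'U': 2, 'D': 1})
```

This confirms the five operations are individually addressable; the question is really about making the handler emit them as five Kafka messages rather than one.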

2 Answers:

Answer 0 (score: 0)

I would recommend using the Kafka Connect Handler instead, since it registers the schema of your data with the Confluent Schema Registry, which makes it much easier to stream the data onwards to targets such as Elasticsearch (using Kafka Connect).

In Kafka, each record from Oracle will then be one Kafka message.
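For orientation, a sketch of what that setup might look like. Property names follow the GoldenGate for Big Data 12.3 Kafka Connect handler; the file names, topic template, and Schema Registry URL are placeholders to adapt, so verify each property against the documentation for your exact version:

```properties
# .props file: route output through the Kafka Connect handler
gg.handlerlist=kafkaconnect
gg.handler.kafkaconnect.type=kafkaconnect
gg.handler.kafkaconnect.kafkaProducerConfigFile=kafkaconnect.properties
# one Kafka message per source operation
gg.handler.kafkaconnect.mode=op
gg.handler.kafkaconnect.topicMappingTemplate=${tableName}

# kafkaconnect.properties: Avro converters backed by the Schema Registry
# (URL below is a placeholder for your Schema Registry instance)
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
```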

Answer 1 (score: 0)

Make the following change in the .props file:

gg.handler.kafkahandler.mode=op

It worked!!
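In context, a sketch of the surrounding Kafka handler configuration in the .props file. Property names are per the GoldenGate for Big Data 12.3 Kafka handler; the producer config file name and topic template are placeholders, so check them against your installation:

```properties
gg.handlerlist=kafkahandler
gg.handler.kafkahandler.type=kafka
gg.handler.kafkahandler.KafkaProducerConfigFile=custom_kafka_producer.properties
gg.handler.kafkahandler.topicMappingTemplate=${tableName}
gg.handler.kafkahandler.format=json
# op = one Kafka message per operation (the fix in this answer);
# tx = all operations in a transaction batched into one message.
gg.handler.kafkahandler.mode=op
```

The grouping in the question is the transaction-mode (tx) behavior: the five operations were captured as one transaction batch, and switching the mode to op emits each operation as its own message.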