I'm streaming tables into topics with the Debezium SQL Server connector. Thanks to Debezium's ExtractNewRecordState SMT, I receive the following message in the topic:
```json
{
  "schema": {
    "type": "struct",
    "fields": [
      { "type": "int64", "optional": false, "field": "id" },
      { "type": "string", "optional": false, "field": "customer_code" },
      { "type": "string", "optional": false, "field": "topic_name" },
      { "type": "string", "optional": true, "field": "payload_key" },
      { "type": "boolean", "optional": false, "field": "is_ordered" },
      { "type": "string", "optional": true, "field": "headers" },
      { "type": "string", "optional": false, "field": "payload" },
      {
        "type": "int64",
        "optional": false,
        "name": "io.debezium.time.Timestamp",
        "version": 1,
        "field": "created_on"
      }
    ],
    "optional": false,
    "name": "test_server.dbo.kafka_event.Value"
  },
  "payload": {
    "id": 129,
    "customer_code": "DVTPRDFT411",
    "topic_name": "DVTPRDFT411",
    "payload_key": null,
    "is_ordered": false,
    "headers": "{\"kafka_timestamp\":1594566354199}",
    "payload": "MSG 18",
    "created_on": 1594595154267
  }
}
```
After adding `value.converter.schemas.enable=false`, I was able to get rid of the `schema` section and keep only the `payload` section, as shown below:
```json
{
  "id": 130,
  "customer_code": "DVTPRDFT411",
  "topic_name": "DVTPRDFT411",
  "payload_key": null,
  "is_ordered": false,
  "headers": "{\"kafka_timestamp\":1594566354199}",
  "payload": "MSG 19",
  "created_on": 1594595154280
}
```
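For reference, with schemas disabled the value is plain JSON, so a downstream consumer can pull out `customer_code` directly. A minimal Python sketch (illustrative only, using the sample record above) of what `ExtractField$Value` conceptually does to each record:

```python
import json

# The schemaless message value, as delivered once
# value.converter.schemas.enable=false is set.
message = '''{
  "id": 130,
  "customer_code": "DVTPRDFT411",
  "topic_name": "DVTPRDFT411",
  "payload_key": null,
  "is_ordered": false,
  "headers": "{\\"kafka_timestamp\\":1594566354199}",
  "payload": "MSG 19",
  "created_on": 1594595154280
}'''

record = json.loads(message)

# ExtractField$Value replaces the entire record value with
# the single named field.
extracted = record["customer_code"]
print(extracted)  # DVTPRDFT411
```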
I'd like to go one step further and extract only the `customer_code` field. I tried the `ExtractField$Value` SMT, but I keep getting the exception `IllegalArgumentException: Unknown field: customer_code`.

My configuration is as follows:
```properties
transforms=unwrap,extract
transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState
transforms.unwrap.drop.tombstones=true
transforms.unwrap.delete.handling.mode=drop
transforms.extract.type=org.apache.kafka.connect.transforms.ExtractField$Key
transforms.extract.field=customer_code
```
I've tried many other SMTs, including `ExtractField$Key` and `ValueToKey`, but I can't get it to work. I'd appreciate it if you could tell me what I'm doing wrong. According to Confluent's tutorial it should work, but it doesn't.
**Update**

I'm running Kafka Connect with `connect-standalone worker.properties sqlserver.properties`.

`worker.properties`:
```properties
offset.storage.file.filename=C:/development/kafka_2.12-2.5.0/data/kafka/connect/connect.offsets
plugin.path=C:/development/kafka_2.12-2.5.0/plugins
bootstrap.servers=127.0.0.1:9092
offset.flush.interval.ms=10000
rest.port=10082
rest.host.name=127.0.0.1
rest.advertised.port=10082
rest.advertised.host.name=127.0.0.1
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false
```
`sqlserver.properties`:
```properties
name=sql-server-connector
connector.class=io.debezium.connector.sqlserver.SqlServerConnector
database.hostname=127.0.0.1
database.port=1433
database.user=sa
database.password=dummypassword
database.dbname=STGCTR
database.history.kafka.bootstrap.servers=127.0.0.1:9092
database.server.name=wfo
table.whitelist=dbo.kafka_event
database.history.kafka.topic=db_schema_history
transforms=unwrap,extract
transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState
transforms.unwrap.drop.tombstones=true
transforms.unwrap.delete.handling.mode=drop
transforms.extract.type=org.apache.kafka.connect.transforms.ExtractField$Value
transforms.extract.field=customer_code
```
---

**Answer** (score: 1)
The `schema` and `payload` fields suggest your data was serialized with a schemas-enabled JsonConverter. Simply setting `value.converter.schemas.enable=false` should achieve your goal.
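A minimal sketch of the relevant settings, assuming the same file layout as in the question (the transform lines mirror those already present in `sqlserver.properties`):

```properties
# worker.properties — emit plain JSON values without the schema envelope
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false

# sqlserver.properties — unwrap the Debezium envelope, then keep one field
transforms=unwrap,extract
transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState
transforms.extract.type=org.apache.kafka.connect.transforms.ExtractField$Value
transforms.extract.field=customer_code
```

Note that `ExtractField$Value` operates on the record *value*, while `ExtractField$Key` operates on the record *key*; applying the `$Key` variant (as in the first config shown in the question) would fail with `Unknown field: customer_code` whenever the key holds only the primary key column.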