I am trying to use Debezium to sync a table from an upstream database to a downstream database, following the approach described in the Debezium blog here.
In the downstream table I only need some of the columns from the upstream table. I also want to rename some of the columns (including the primary key). If I do not try to rename the primary key, the sync works fine.
I am using:
I have listed the full details of my database and connector setup below.
(1) Database table definitions:
The DDL for the upstream table is:
CREATE TABLE [kafkatest.service1].dbo.Users (
Id int IDENTITY(1,1) NOT NULL,
Name nvarchar COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
CONSTRAINT PK_Users PRIMARY KEY (Id)
)
GO
The DDL for the downstream table is:
CREATE TABLE [kafkatest.service2].dbo.Users (
LocalId int IDENTITY(1,1) NOT NULL, -- added to avoid IDENTITY_INSERT issue with SQL Server
ExternalId int NOT NULL,
ExternalName nvarchar COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
CONSTRAINT PK_Users PRIMARY KEY (LocalId)
)
GO
In particular, note that the 'Id' column in the upstream table (which is the primary key) should map to the 'ExternalId' column in the downstream table.
(2) Kafka Connect connector definitions:
Source connector:
{
"name": "users-connector",
"config": {
"connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
"tasks.max": "1",
"database.server.name": "sqlserver",
"database.hostname": "sqlserver",
"database.port": "1433",
"database.user": "sa",
"database.password": "Password!",
"database.dbname": "kafkatest.service1",
"database.history.kafka.bootstrap.servers": "kafka:9092",
"database.history.kafka.topic": "schema-changes.users",
"table.whitelist": "dbo.Users"
}
}
Sink connector:
{
"name": "jdbc-sink",
"config": {
"connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
"tasks.max": "1",
"topics.regex": "sqlserver\\.dbo\\.(Users)",
"connection.url": "jdbc:sqlserver://sqlserver:1433;databaseName=kafkatest.service2",
"connection.user": "sa",
"connection.password": "Password!",
"transforms": "unwrap,route,RenameField",
"transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
"transforms.unwrap.drop.tombstones": "false",
"transforms.route.type": "org.apache.kafka.connect.transforms.RegexRouter",
"transforms.route.regex": "(?:[^.]+)\\.(?:[^.]+)\\.([^.]+)",
"transforms.route.replacement": "$1",
"transforms.RenameField.type": "org.apache.kafka.connect.transforms.ReplaceField$Value",
"transforms.RenameField.renames": "Id:ExternalId,Name:ExternalName",
"auto.create": "false",
"auto.evolve": "false",
"insert.mode": "upsert",
"delete.enabled": "true",
"pk.fields": "Id",
"pk.mode": "record_key"
}
}
As far as I understand, "pk.mode" must be "record_key" to enable deletes. I have tried setting the "pk.fields" value to both "Id" and "ExternalId", but neither works.
(3) Error messages:
In the first case (i.e. "pk.fields": "Id") I get the following error:
2020-08-18 10:16:16,951 INFO || Unable to find fields [SinkRecordField{schema=Schema{INT32}, name='Id', isPrimaryKey=true}] among column names [ExternalId, ExternalName, LocalId] [io.confluent.connect.jdbc.sink.DbStructure]
2020-08-18 10:16:16,952 ERROR || WorkerSinkTask{id=jdbc-sink-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted. Error: Cannot ALTER TABLE "Users" to add missing field SinkRecordField{schema=Schema{INT32}, name='Id', isPrimaryKey=true}, as the field is not optional and does not have a default value [org.apache.kafka.connect.runtime.WorkerSinkTask]
org.apache.kafka.connect.errors.ConnectException: Cannot ALTER TABLE "Users" to add missing field SinkRecordField{schema=Schema{INT32}, name='Id', isPrimaryKey=true}, as the field is not optional and does not have a default value
In the second case (i.e. "pk.fields": "ExternalId") I get the following error:
2020-08-18 10:17:50,192 ERROR || WorkerSinkTask{id=jdbc-sink-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted. Error: PK mode for table 'Users' is RECORD_KEY with configured PK fields [ExternalId], but record key schema does not contain field: ExternalId [org.apache.kafka.connect.runtime.WorkerSinkTask]
org.apache.kafka.connect.errors.ConnectException: PK mode for table 'Users' is RECORD_KEY with configured PK fields [ExternalId], but record key schema does not contain field: ExternalId
Is it possible to rename the primary key when using Debezium? Or do I always need to design my database tables so that the primary key names match in the upstream and downstream databases?
Answer 0 (score: 1):
Try renaming the key field as well:
"transforms": "unwrap,route,RenameField,RenameKey",
...
"transforms.RenameKey.type": "org.apache.kafka.connect.transforms.ReplaceField$Key",
"transforms.RenameKey.renames": "Id:ExternalId",
When using "pk.mode": "record_key", the primary key from the message key is used to build the upsert query statement.
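The reason the original configuration failed is that ReplaceField$Value only renames fields inside the record value; the Kafka message key still carries the original 'Id' field, so neither "pk.fields": "Id" (absent from the table) nor "pk.fields": "ExternalId" (absent from the key schema) could be resolved. With the extra ReplaceField$Key transform, the key field itself is renamed, and "pk.fields" can then point at the new name. As a sketch (based on the sink configuration in the question, not a tested setup), the relevant part of the sink connector config would look roughly like this:

```json
{
  "transforms": "unwrap,route,RenameField,RenameKey",
  "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
  "transforms.unwrap.drop.tombstones": "false",
  "transforms.route.type": "org.apache.kafka.connect.transforms.RegexRouter",
  "transforms.route.regex": "(?:[^.]+)\\.(?:[^.]+)\\.([^.]+)",
  "transforms.route.replacement": "$1",
  "transforms.RenameField.type": "org.apache.kafka.connect.transforms.ReplaceField$Value",
  "transforms.RenameField.renames": "Id:ExternalId,Name:ExternalName",
  "transforms.RenameKey.type": "org.apache.kafka.connect.transforms.ReplaceField$Key",
  "transforms.RenameKey.renames": "Id:ExternalId",
  "pk.mode": "record_key",
  "pk.fields": "ExternalId"
}
```

With both transforms in place, the record key schema contains 'ExternalId', which matches both "pk.fields" and the downstream column name.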