Kafka Connect S3 Source Connector Configuration Issue

Asked: 2020-07-05 15:53:40

Tags: amazon-web-services amazon-s3 apache-kafka apache-kafka-connect

I have used the Kafka Connect S3 sink connector to upload some Avro messages from a topic (say my.topic) to an Amazon S3 bucket (say s3-bucket). The sink connector's configuration is as follows:

{
        "connector.class": "io.confluent.connect.s3.S3SinkConnector",
        "key.converter": "org.apache.kafka.connect.converters.LongConverter",
        "value.converter": "io.confluent.connect.avro.AvroConverter",
        "value.converter.schema.registry.url": "http://schemaregistry:8099",
        "value.converter.value.subject.name.strategy": "io.confluent.kafka.serializers.subject.TopicRecordNameStrategy",
        "tasks.max": "1",
        "topics": "my.topic",
        "s3.region": "eu-west-2",
        "s3.bucket.name": "s3-bucket",
        "flush.size": "5",
        "storage.class": "io.confluent.connect.s3.storage.S3Storage",
        "format.class": "io.confluent.connect.s3.format.avro.AvroFormat",
        "schema.generator.class": "io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator",
        "schema.compatibility": "NONE",
        "partitioner.class": "io.confluent.connect.storage.partitioner.DefaultPartitioner"
}

This works as expected. All the messages are the same record with the same schema version; I write 5 of them to the topic and see a single S3 object in the bucket at the path

/topics/my.topic/partition=0/my.topic+0+0000000000.avro
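
Just to rule out the bucket itself, here is a minimal sketch (not part of the original setup, and assuming AWS credentials are available via the default provider chain) that lists the objects under the prefix the source connector is later assigned to scan, using the bucket and region from the configs in this question:

import boto3

# list whatever sits under the folder the source connector will later watch
s3 = boto3.client("s3", region_name="eu-west-2")
resp = s3.list_objects_v2(Bucket="s3-bucket", Prefix="topics/my.topic/partition=0/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
# this prints topics/my.topic/partition=0/my.topic+0+0000000000.avro and its size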

Now I want to put these stored messages back onto a different, empty topic. I start the S3 source connector with the following configuration:

{
        "confluent.topic.bootstrap.servers": "kafka:9092",
        "confluent.topic.replication.factor": 1,
        "connector.class": "io.confluent.connect.s3.source.S3SourceConnector",
        "s3.region": "eu-west-2",
        "s3.bucket.name": "s3-bucket",
        "format.class": "io.confluent.connect.s3.format.avro.AvroFormat",
        "partitioner.class": "io.confluent.connect.storage.partitioner.DefaultPartitioner",
        "transforms": "AddPrefix",
        "transforms.AddPrefix.type": "org.apache.kafka.connect.transforms.RegexRouter",
        "transforms.AddPrefix.regex": ".*",
        "transforms.AddPrefix.replacement": "recovery_$0"
}
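
For reference, the connector was registered under the name tx-s3-restore (it appears in the task ids in the logs below). A small sketch like this one, assuming the Connect worker's REST API is reachable on localhost:8083, can confirm that the connector and its task report RUNNING rather than FAILED:

import json
import urllib.request

# assumption: the Connect REST API is exposed on localhost:8083
status_url = "http://localhost:8083/connectors/tx-s3-restore/status"
with urllib.request.urlopen(status_url) as resp:
    status = json.load(resp)

print(status["connector"]["state"])  # e.g. RUNNING
for task in status["tasks"]:
    print(task["id"], task["state"], task.get("trace", ""))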

When I look at the logs produced by kafka-connect (running in a Docker container), it appears happy: there are no errors, it correctly identifies my bucket, and the folder path within it that it has been assigned to watch is

/topics/my.topic/partition=0/

However, it never detects the file inside, and nothing is ever written to the expected recovery_my.topic topic. It repeatedly logs

kafka-connect         | [2020-07-05 15:31:46,311] INFO PartitionCheckingTask - Checking if Partitions have changed. (io.confluent.connect.cloud.storage.source.util.PartitionCheckingTask)
kafka-connect         | [2020-07-05 15:31:47,963] INFO WorkerSourceTask{id=tx-s3-restore-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask)
kafka-connect         | [2020-07-05 15:31:47,964] INFO WorkerSourceTask{id=tx-s3-restore-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask)
kafka-connect         | [2020-07-05 15:31:50,483] INFO AvroDataConfig values: 
kafka-connect         |     schemas.cache.config = 50
kafka-connect         |     enhanced.avro.schema.support = false
kafka-connect         |     connect.meta.data = true
kafka-connect         |  (io.confluent.connect.avro.AvroDataConfig)
kafka-connect         | [2020-07-05 15:31:50,483] INFO AvroDataConfig values: 
kafka-connect         |     schemas.cache.config = 50
kafka-connect         |     enhanced.avro.schema.support = false
kafka-connect         |     connect.meta.data = true
kafka-connect         |  (io.confluent.connect.avro.AvroDataConfig)
kafka-connect         | [2020-07-05 15:31:50,537] INFO AvroDataConfig values: 
kafka-connect         |     schemas.cache.config = 50
kafka-connect         |     enhanced.avro.schema.support = false
kafka-connect         |     connect.meta.data = true
kafka-connect         |  (io.confluent.connect.avro.AvroDataConfig)
kafka-connect         | [2020-07-05 15:31:50,589] INFO No new files ready after scan task assigned folders (io.confluent.connect.cloud.storage.source.StorageSourceTask)

This suggests to me that it is ignoring the file for some reason. Here is the full S3 source connector configuration extracted from the logs:

kafka-connect         | [2020-07-05 15:10:49,427] INFO S3SourceConnectorConfig values: 
kafka-connect         |     behavior.on.error = fail
kafka-connect         |     confluent.license = 
kafka-connect         |     confluent.topic = _confluent-command
kafka-connect         |     confluent.topic.bootstrap.servers = [kafka:9092]
kafka-connect         |     confluent.topic.replication.factor = 1
kafka-connect         |     directory.delim = /
kafka-connect         |     filename.regex = (.+)\+(\d+)\+.+$
kafka-connect         |     folders = [topics/my.topic/partition=0/]
kafka-connect         |     format.bytearray.extension = .bin
kafka-connect         |     format.bytearray.separator = 
kafka-connect         |     format.class = class io.confluent.connect.s3.format.avro.AvroFormat
kafka-connect         |     partition.field.name = []
kafka-connect         |     partitioner.class = class io.confluent.connect.storage.partitioner.DefaultPartitioner
kafka-connect         |     path.format = 
kafka-connect         |     record.batch.max.size = 200
kafka-connect         |     s3.bucket.name = s3-bucket
kafka-connect         |     s3.credentials.provider.class = class com.amazonaws.auth.DefaultAWSCredentialsProviderChain
kafka-connect         |     s3.http.send.expect.continue = true
kafka-connect         |     s3.part.retries = 3
kafka-connect         |     s3.poll.interval.ms = 60000
kafka-connect         |     s3.proxy.password = [hidden]
kafka-connect         |     s3.proxy.url = 
kafka-connect         |     s3.proxy.user = null
kafka-connect         |     s3.region = eu-west-2
kafka-connect         |     s3.retry.backoff.ms = 200
kafka-connect         |     s3.sse.customer.key = [hidden]
kafka-connect         |     s3.ssea.name = 
kafka-connect         |     s3.wan.mode = false
kafka-connect         |     schema.cache.size = 50
kafka-connect         |     store.url = null
kafka-connect         |     topics.dir = topics
kafka-connect         |  (io.confluent.connect.s3.source.S3SourceConnectorConfig)
kafka-connect         | [2020-07-05 15:10:49,428] INFO [Producer clientId=connector-producer-tx-s3-restore-0] Cluster ID: nlQYzBVYRbWozKk54-Qx_A (org.apache.kafka.clients.Metadata)
kafka-connect         | [2020-07-05 15:10:49,432] INFO AvroDataConfig values: 
kafka-connect         |     schemas.cache.config = 50
kafka-connect         |     enhanced.avro.schema.support = false
kafka-connect         |     connect.meta.data = true
kafka-connect         |  (io.confluent.connect.avro.AvroDataConfig)
kafka-connect         | [2020-07-05 15:10:49,434] INFO Starting source connector task with assigned folders [topics/my.topic/partition=0/] using partitioner io.confluent.connect.storage.partitioner.DefaultPartitioner (io.confluent.connect.cloud.storage.source.StorageSourceTask)

If anyone has any ideas as to why my file is being ignored, I would be very grateful.

1 Answer:

Answer 0 (score: 0)

Since the Confluent S3 source connector is not open source and requires a license, you need to add the Confluent license property (which comes with a 30-day trial) to your source connector configuration:

"confluent.license": ""

I tried this with my use case and it is working.
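
For completeness, here is a sketch of how the fix can be applied through the Connect REST API, assuming the worker is reachable on localhost:8083 and the connector is named tx-s3-restore as in the question's logs; the config is simply the question's source connector config with confluent.license added (an empty string starts the 30-day trial):

import json
import urllib.request

# the question's source connector config, with only "confluent.license" added;
# an empty string activates the 30-day trial license
config = {
    "confluent.license": "",
    "confluent.topic.bootstrap.servers": "kafka:9092",
    "confluent.topic.replication.factor": 1,
    "connector.class": "io.confluent.connect.s3.source.S3SourceConnector",
    "s3.region": "eu-west-2",
    "s3.bucket.name": "s3-bucket",
    "format.class": "io.confluent.connect.s3.format.avro.AvroFormat",
    "partitioner.class": "io.confluent.connect.storage.partitioner.DefaultPartitioner",
    "transforms": "AddPrefix",
    "transforms.AddPrefix.type": "org.apache.kafka.connect.transforms.RegexRouter",
    "transforms.AddPrefix.regex": ".*",
    "transforms.AddPrefix.replacement": "recovery_$0",
}

# assumption: Connect REST API on localhost:8083, connector named tx-s3-restore;
# PUT /connectors/{name}/config creates the connector or updates its config in place
req = urllib.request.Request(
    "http://localhost:8083/connectors/tx-s3-restore/config",
    data=json.dumps(config).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="PUT",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)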