Using ByteArrayFormat with a TimeBasedPartitioner that extracts the timestamp via RecordField

Asked: 2019-05-14 01:38:21

Tags: apache-kafka-connect confluent

I am trying to use a TimeBasedPartitioner with RecordField timestamp extraction, using the following configuration:

{
    "name": "s3-sink",
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "tasks.max": "10",
    "topics": "topics1.topics2",
    "s3.region": "us-east-1",
    "s3.bucket.name": "bucket",
    "s3.part.size": "5242880",
    "s3.compression.type": "gzip",
    "timezone": "UTC",
    "rotate.schedule.interval.ms": "900000",
    "flush.size": "1000000",
    "schema.compatibility": "NONE",
    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
    "format.class": "io.confluent.connect.s3.format.bytearray.ByteArrayFormat",
    "partitioner.class": "io.confluent.connect.storage.partitioner.HourlyPartitioner",
    "partition.duration.ms": "900000",
    "locale": "en",
    "timestamp.extractor": "RecordField",
    "timestamp.field": "time",
    "key.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
    "key.converter.schemas.enabled": false,
    "value.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
    "value.converter.schemas.enabled": false,
    "interal.key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "internal.key.converter.schemas.enabled": false,
    "interal.value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "internal.value.converter.schemas.enabled": false,
}

I keep getting the error below and can't find much that explains what is going on. Looking at the source code, it appears the record is not of Struct or Map type, so I'm wondering whether there is a problem with using ByteArrayFormat?

org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
    at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:546)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:302)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:205)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:173)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: io.confluent.connect.storage.errors.PartitionException: Error encoding partition.
    at io.confluent.connect.storage.partitioner.TimeBasedPartitioner$RecordFieldTimestampExtractor.extract(TimeBasedPartitioner.java:294)
    at io.confluent.connect.s3.TopicPartitionWriter.executeState(TopicPartitionWriter.java:199)
    at io.confluent.connect.s3.TopicPartitionWriter.write(TopicPartitionWriter.java:176)
    at io.confluent.connect.s3.S3SinkTask.put(S3SinkTask.java:195)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:524)
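
The branching that fails can be illustrated with a simplified sketch (this is not the actual Confluent source, just a hypothetical reduction of the logic in `TimeBasedPartitioner$RecordFieldTimestampExtractor.extract`): the extractor can only read a timestamp field from a Connect Struct or a Map, so a raw `byte[]` value produced by `ByteArrayConverter` falls through to the `PartitionException`.

```java
import java.util.Map;

public class RecordFieldExtractorSketch {

    static class PartitionException extends RuntimeException {
        PartitionException(String msg) { super(msg); }
    }

    // fieldName corresponds to the "timestamp.field" setting ("time" above).
    // Simplified: the real extractor also handles Connect Struct values and
    // ISO-8601 String timestamps; the key point is that byte[] matches
    // neither branch.
    static long extract(Object value, String fieldName) {
        if (value instanceof Map) {
            Object ts = ((Map<?, ?>) value).get(fieldName);
            if (ts instanceof Number) {
                return ((Number) ts).longValue();
            }
        }
        // byte[] from ByteArrayConverter ends up here.
        throw new PartitionException("Error encoding partition.");
    }

    public static void main(String[] args) {
        // A Map value (e.g. as produced by JsonConverter) works:
        System.out.println(extract(Map.of("time", 1557797901000L), "time"));

        // A raw byte[] (ByteArrayConverter) does not:
        try {
            extract("{\"time\":1557797901000}".getBytes(), "time");
        } catch (PartitionException e) {
            System.out.println("threw: " + e.getMessage());
        }
    }
}
```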

I have been able to write the data out using the default partitioner.
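
If the record values are JSON, one possible direction (an untested sketch, not from the original post) is to let Connect deserialize each value into a Map so the RecordField extractor can find the "time" field: switch the value converter to JsonConverter and, since ByteArrayFormat expects raw bytes, pair it with JsonFormat. The relevant settings would look roughly like:

    {
        "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
        "value.converter": "org.apache.kafka.connect.json.JsonConverter",
        "value.converter.schemas.enable": false,
        "timestamp.extractor": "RecordField",
        "timestamp.field": "time"
    }

Note that the raw-bytes output of ByteArrayFormat is then lost; whether that trade-off is acceptable depends on the use case.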

0 Answers