I'm trying to consume a topic I published using Confluent's Kafka Connect, but I can't deserialize it. I believe it is Avro-serialized, but I can't find the right deserializer.
The message looks like the following when read from the topic with a console consumer:
null {"c1":{"int":10},"c2":{"string":"foo"},"create_ts":1552598863000,"update_ts":1552598863000}
Below is my deserializer:
import java.util.Arrays;
import java.util.Map;

import javax.xml.bind.DatatypeConverter;

import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.DatumReader;
import org.apache.avro.io.Decoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.specific.SpecificDatumReader;
import org.apache.avro.specific.SpecificRecordBase;
import org.apache.kafka.common.errors.SerializationException;
import org.apache.kafka.common.serialization.Deserializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class AvroDeserializer<T extends SpecificRecordBase> implements Deserializer<T> {

    private static final Logger LOGGER = LoggerFactory.getLogger(AvroDeserializer.class);

    protected final Class<T> targetType;

    public AvroDeserializer(Class<T> targetType) {
        this.targetType = targetType;
    }

    @Override
    public void close() {
        // No-op
    }

    @Override
    public void configure(Map<String, ?> arg0, boolean arg1) {
        // No-op
    }

    @SuppressWarnings("unchecked")
    @Override
    public T deserialize(String topic, byte[] data) {
        try {
            T result = null;
            if (data != null) {
                LOGGER.debug("data='{}'", DatatypeConverter.printHexBinary(data));
                // Read the payload using the schema from the generated target class
                DatumReader<GenericRecord> datumReader =
                        new SpecificDatumReader<>(targetType.newInstance().getSchema());
                Decoder decoder = DecoderFactory.get().binaryDecoder(data, null);
                result = (T) datumReader.read(null, decoder);
                LOGGER.debug("deserialized data='{}'", result);
            }
            return result;
        } catch (Exception ex) {
            throw new SerializationException(
                    "Can't deserialize data '" + Arrays.toString(data) + "' from topic '" + topic + "'", ex);
        }
    }
}
The exception:
org.apache.kafka.common.errors.SerializationException: Error deserializing key/value for partition mysql-foobar-0 at offset 10. If needed, please seek past the record to continue consumption.
Caused by: org.apache.kafka.common.errors.SerializationException: Can't deserialize data '[0, 0, 0, 0, 21, 2, 20, 2, 6, 102, 111, 111, -80, -78, -44, -31, -81, 90, -80, -78, -44, -31, -81, 90]' from topic 'mysql-foobar'
Caused by: java.lang.InstantiationException: null
at sun.reflect.InstantiationExceptionConstructorAccessorImpl.newInstance(InstantiationExceptionConstructorAccessorImpl.java:48) ~[na:1.8.0_131]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[na:1.8.0_131]
at java.lang.Class.newInstance(Class.java:442) ~[na:1.8.0_131]
at com.spring.kafkaexample.springbootkafkaconsumer.config.AvroDeserializer.deserialize(AvroDeserializer.java:48) ~[classes/:na]
at com.spring.kafkaexample.springbootkafkaconsumer.config.AvroDeserializer.deserialize(AvroDeserializer.java:18) ~[classes/:na]
Answer 0 (score: 0)
It's not clear whether "looks like the following when read from the topic with a console consumer" means you used kafka-avro-console-consumer or just kafka-console-consumer. The way to know whether your data is Avro is to look at the producer/connector configuration.
Regardless, there is no need to write your own deserializer. Also, Confluent doesn't use the plain Avro schema-plus-message convention that your code assumes (hence the error): each message is prefixed with a magic byte and a 4-byte schema ID, and the schema must first be looked up from the Schema Registry.
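You can actually see that wire format in the bytes from your exception: the first byte is the magic byte 0, and the next four bytes are the schema ID (21 here). Below is a minimal sketch (class name is mine; the byte array is copied from the exception above) that splits a raw value into those parts:

import java.nio.ByteBuffer;

public class WireFormatDemo {
    public static void main(String[] args) {
        // Raw value bytes copied from the SerializationException above.
        byte[] data = {0, 0, 0, 0, 21, 2, 20, 2, 6, 102, 111, 111,
                -80, -78, -44, -31, -81, 90, -80, -78, -44, -31, -81, 90};

        ByteBuffer buf = ByteBuffer.wrap(data);
        byte magic = buf.get();      // Confluent magic byte, always 0
        int schemaId = buf.getInt(); // big-endian Schema Registry ID -> 21

        System.out.println("magic=" + magic + ", schemaId=" + schemaId);
        // Only the bytes after this 5-byte header are plain Avro binary,
        // which is why decoding from offset 0 with binaryDecoder fails.
    }
}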
Add the Confluent Maven repository:
<repositories>
    <repository>
        <id>confluent</id>
        <url>https://packages.confluent.io/maven/</url>
    </repository>
</repositories>
Then add the Confluent serializer dependency:
<dependency>
    <groupId>io.confluent</groupId>
    <artifactId>kafka-avro-serializer</artifactId>
    <version>${confluent.version}</version>
</dependency>
Then import io.confluent.kafka.serializers.KafkaAvroDeserializer, or use that class in your consumer configuration (see the sketch after the link below).
https://docs.confluent.io/current/clients/install.html#java
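A minimal consumer setup might look like this sketch (the class name, bootstrap server, group id, and Schema Registry URL are placeholders for your environment):

import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import io.confluent.kafka.serializers.KafkaAvroDeserializer;

public class AvroConsumerConfig {
    public static KafkaConsumer<String, Object> buildConsumer() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "foobar-consumer");         // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);
        // The deserializer fetches schemas by ID from the registry.
        props.put("schema.registry.url", "http://localhost:8081");            // placeholder
        // Optional: deserialize into generated SpecificRecord classes
        // instead of GenericRecord.
        props.put("specific.avro.reader", "true");
        return new KafkaConsumer<>(props);
    }
}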
OR, you can switch your MySQL connector to not use the Avro Converter; a sketch follows.
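For example, the relevant converter overrides in the connector (or worker) configuration could look like this sketch, which switches to plain JSON; only these keys matter here, the rest of your connector config stays as-is:

key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false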