Kafka Streams exception: Could not find a public no-argument constructor for org.apache.kafka.common.serialization.Serdes$WrapperSerde

Date: 2018-06-27 12:39:34

Tags: java apache-kafka apache-kafka-streams

I am getting the error stack trace below while using Kafka Streams.

Update: As requested by @matthias-j-sax, I have implemented my own WrapperSerde with a default constructor for my Serdes, but I am still getting the following exception:

org.apache.kafka.streams.errors.StreamsException: stream-thread [streams-request-count-4c239508-6abe-4901-bd56-d53987494770-StreamThread-1] Failed to rebalance.
    at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests (StreamThread.java:836)
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce (StreamThread.java:784)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop (StreamThread.java:750)
    at org.apache.kafka.streams.processor.internals.StreamThread.run (StreamThread.java:720)
Caused by: org.apache.kafka.streams.errors.StreamsException: Failed to configure value serde class myapps.serializer.Serdes$WrapperSerde
    at org.apache.kafka.streams.StreamsConfig.defaultValueSerde (StreamsConfig.java:972)
    at org.apache.kafka.streams.processor.internals.AbstractProcessorContext.<init> (AbstractProcessorContext.java:59)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.<init> (ProcessorContextImpl.java:42)
    at org.apache.kafka.streams.processor.internals.StreamTask.<init> (StreamTask.java:136)
    at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask (StreamThread.java:405)
    at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask (StreamThread.java:369)
    at org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.createTasks (StreamThread.java:354)
    at org.apache.kafka.streams.processor.internals.TaskManager.addStreamTasks (TaskManager.java:148)
    at org.apache.kafka.streams.processor.internals.TaskManager.createTasks (TaskManager.java:107)
    at org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener.onPartitionsAssigned (StreamThread.java:260)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete (ConsumerCoordinator.java:259)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded (AbstractCoordinator.java:367)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup (AbstractCoordinator.java:316)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll (ConsumerCoordinator.java:290)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce (KafkaConsumer.java:1149)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll (KafkaConsumer.java:1115)
    at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests (StreamThread.java:827)
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce (StreamThread.java:784)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop (StreamThread.java:750)
    at org.apache.kafka.streams.processor.internals.StreamThread.run (StreamThread.java:720)
Caused by: java.lang.NullPointerException
    at myapps.serializer.Serdes$WrapperSerde.configure (Serdes.java:30)
    at org.apache.kafka.streams.StreamsConfig.defaultValueSerde (StreamsConfig.java:968)
    at org.apache.kafka.streams.processor.internals.AbstractProcessorContext.<init> (AbstractProcessorContext.java:59)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.<init> (ProcessorContextImpl.java:42)
    at org.apache.kafka.streams.processor.internals.StreamTask.<init> (StreamTask.java:136)
    at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask (StreamThread.java:405)
    at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask (StreamThread.java:369)
    at org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.createTasks (StreamThread.java:354)
    at org.apache.kafka.streams.processor.internals.TaskManager.addStreamTasks (TaskManager.java:148)
    at org.apache.kafka.streams.processor.internals.TaskManager.createTasks (TaskManager.java:107)
    at org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener.onPartitionsAssigned (StreamThread.java:260)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete (ConsumerCoordinator.java:259)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded (AbstractCoordinator.java:367)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup (AbstractCoordinator.java:316)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll (ConsumerCoordinator.java:290)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce (KafkaConsumer.java:1149)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll (KafkaConsumer.java:1115)
    at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests (StreamThread.java:827)
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce (StreamThread.java:784)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop (StreamThread.java:750)
    at org.apache.kafka.streams.processor.internals.StreamThread.run (StreamThread.java:720)

Here is my use case:

I will be getting JSON responses as the input to the stream, and I want to count the requests whose status code is not 200. Initially I went through the Kafka Streams documentation, both the official docs and Confluent's, and ran the WordCountDemo example, which worked fine. Then I tried to write this code, but I am getting this exception. I am very new to Kafka Streams; I went through the stack trace, but could not understand the context, so I came here for help!

Here is my code:


LogCount.java

package myapps;

import java.util.Properties;
import java.util.concurrent.CountDownLatch;

import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

import myapps.serializer.JsonDeserializer;
import myapps.serializer.JsonSerializer;

public class LogCount {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-request-count");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        JsonSerializer<Request> requestJsonSerializer = new JsonSerializer<>();
        JsonDeserializer<Request> requestJsonDeserializer = new JsonDeserializer<>(Request.class);
        Serde<Request> requestSerde = Serdes.serdeFrom(requestJsonSerializer, requestJsonDeserializer);

        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, requestSerde.getClass().getName());

        final StreamsBuilder builder = new StreamsBuilder();

        KStream<String, Request> source = builder.stream("streams-requests-input");
        source.filter((k, v) -> v.getHttpStatusCode() != 200)
              .groupByKey()
              .count()
              .toStream()
              .to("streams-requests-output", Produced.with(Serdes.String(), Serdes.Long()));

        final Topology topology = builder.build();
        final KafkaStreams streams = new KafkaStreams(topology, props);
        final CountDownLatch latch = new CountDownLatch(1);

        System.out.println(topology.describe());

        // attach shutdown handler to catch control-c
        Runtime.getRuntime().addShutdownHook(new Thread("streams-shutdown-hook") {
            @Override
            public void run() {
                streams.close();
                latch.countDown();
            }
        });

        try {
            streams.cleanUp();
            streams.start();
            latch.await();
        } catch (Throwable e) {
            System.exit(1);
        }
        System.exit(0);
    }
}

JsonDeserializer.java

package myapps.serializer;

import com.google.gson.Gson;
import org.apache.kafka.common.serialization.Deserializer;

import java.util.Map;

public class JsonDeserializer<T> implements Deserializer<T> {
    private Gson gson = new Gson();
    private Class<T> deserializedClass;

    public JsonDeserializer(Class<T> deserializedClass) {
        this.deserializedClass = deserializedClass;
    }

    public JsonDeserializer() {
    }

    @Override
    @SuppressWarnings("unchecked")
    public void configure(Map<String, ?> map, boolean b) {
        if (deserializedClass == null) {
            deserializedClass = (Class<T>) map.get("serializedClass");
        }
    }

    @Override
    public T deserialize(String s, byte[] bytes) {
        if (bytes == null) {
            return null;
        }
        return gson.fromJson(new String(bytes), deserializedClass);
    }

    @Override
    public void close() {
    }
}

JsonSerializer.java

package myapps.serializer;

import com.google.gson.Gson;
import org.apache.kafka.common.serialization.Serializer;

import java.nio.charset.Charset;
import java.util.Map;

public class JsonSerializer<T> implements Serializer<T> {
    private Gson gson = new Gson();

    @Override
    public void configure(Map<String, ?> map, boolean b) {
    }

    @Override
    public byte[] serialize(String topic, T t) {
        return gson.toJson(t).getBytes(Charset.forName("UTF-8"));
    }

    @Override
    public void close() {
    }
}

As I mentioned, I will be getting JSON as input, with a structure like this:

{
    "RequestID": "1f6b2409",
    "Protocol": "http",
    "Host": "abc.com",
    "Method": "GET",
    "HTTPStatusCode": "200",
    "User-Agent": "curl%2f7.54.0"
}

The corresponding POJO looks like this:

Request.java

package myapps;

public final class Request {
    private String requestID;
    private String protocol;
    private String host;
    private String method;
    private int httpStatusCode;
    private String userAgent;

    public String getRequestID() { return requestID; }
    public void setRequestID(String requestID) { this.requestID = requestID; }

    public String getProtocol() { return protocol; }
    public void setProtocol(String protocol) { this.protocol = protocol; }

    public String getHost() { return host; }
    public void setHost(String host) { this.host = host; }

    public String getMethod() { return method; }
    public void setMethod(String method) { this.method = method; }

    public int getHttpStatusCode() { return httpStatusCode; }
    public void setHttpStatusCode(int httpStatusCode) { this.httpStatusCode = httpStatusCode; }

    public String getUserAgent() { return userAgent; }
    public void setUserAgent(String userAgent) { this.userAgent = userAgent; }
}

Edit: When I exit from kafka-console-consumer.sh, it says

3 Answers:

Answer 0 (score: 1):

As the error indicates, a class is missing a no-argument default constructor, namely Serdes$WrapperSerde:

Could not find a public no-argument constructor 

The problem is this construct:

Serde<Request> requestSerde = Serdes.serdeFrom(requestJsonSerializer, requestJsonDeserializer);
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, requestSerde.getClass().getName());

Serdes.serdeFrom(...) returns a WrapperSerde that does not have an empty default constructor, so you cannot pass it into StreamsConfig. You can use a Serde created this way only by passing the object directly into the corresponding API calls, for example to overwrite the default serde for certain operators.
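The mechanism can be illustrated with plain reflection, which is essentially what StreamsConfig does with a configured class name. A self-contained sketch (the class names here are made up for illustration, not real Kafka classes):

```java
public class NoArgCtorDemo {

    // Mimics the wrapper returned by Serdes.serdeFrom(serializer, deserializer):
    // it only offers a constructor that takes arguments.
    public static class WrapperLike {
        public WrapperLike(String inner) { }
    }

    // Mimics a hand-written serde class with a public no-argument constructor,
    // which is what instantiation-by-name requires.
    public static class ProperSerde {
        public ProperSerde() { }
    }

    public static void main(String[] args) throws Exception {
        // Instantiation by class name succeeds only with a no-arg constructor.
        Object ok = ProperSerde.class.getDeclaredConstructor().newInstance();
        System.out.println("ProperSerde instantiated: " + (ok != null));

        try {
            WrapperLike.class.getDeclaredConstructor().newInstance();
        } catch (NoSuchMethodException e) {
            // This is the failure StreamsConfig surfaces as
            // "Could not find a public no-argument constructor".
            System.out.println("WrapperLike rejected: no no-arg constructor");
        }
    }
}
```

Because only the class name reaches StreamsConfig, any state you put into a particular serde instance (such as the serializer and deserializer passed to serdeFrom) is lost, which is why a pre-built instance cannot be registered this way.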

To make it work (i.e., to be able to set the serde in the config), you need to implement a proper class that implements the Serde interface.

Answer 1 (score: 0):

requestSerde.getClass().getName() did not work for me. I needed to provide my own WrapperSerde implementation as an inner class. You will probably need to do the same, along the lines of:

public class MySerde extends WrapperSerde<Request> {
    public MySerde() {
        super(new JsonSerializer<>(), new JsonDeserializer<>(Request.class));
    }
}
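With such a wrapper class in place, it can then be registered in the config by class name. A minimal self-contained sketch of the mechanism (plain Java, no Kafka dependency; the class name MySerde is reused from the answer for illustration):

```java
import java.util.Properties;

public class SerdeConfigDemo {

    // Stand-in for a hand-written serde wrapper that exposes a public
    // no-argument constructor.
    public static class MySerde {
        public MySerde() { }
    }

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Mirrors props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, ...);
        // "default.value.serde" is the config key behind that constant.
        props.put("default.value.serde", MySerde.class.getName());

        // Kafka Streams later resolves the configured name back to a class and
        // instantiates it reflectively, which is why a class name (not a
        // pre-built instance) is registered and why that class needs a
        // no-arg constructor.
        String className = props.getProperty("default.value.serde");
        Object serde = Class.forName(className).getDeclaredConstructor().newInstance();
        System.out.println(serde.getClass().getSimpleName());  // prints "MySerde"
    }
}
```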

Answer 2 (score: 0):

Instead of specifying the serde in the properties, add the custom serde when creating the stream:

KStream<String, Request> source = builder.stream("streams-requests-input", Consumed.with(Serdes.String(), requestSerde));