We recently upgraded Kafka to v1.1 and Confluent to v4.0. Since the upgrade we have been hitting persistent problems with state stores. Our application starts a set of streams, then checks whether each state store is ready, terminating the application after 100 failed attempts. Since the upgrade, at least one stream keeps failing with:

Store is not ready : the state store, <your stream>, may have migrated to another instance

The stream itself is in the RUNNING state and messages flow through it, but the store still reports as not ready, so I have no idea what could be going on.
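The readiness check described above (poll the store, give up after 100 attempts) can be sketched as a generic retry helper. This is a minimal illustration, not the actual application code: the class name `StoreReadiness`, the method `waitForStore`, and the attempt/sleep parameters are all hypothetical. In a real Streams app, the supplier would wrap `streams.store(storeName, QueryableStoreTypes.keyValueStore())`, which throws `InvalidStateStoreException` while the store is unavailable or migrating.

```java
import java.util.function.Supplier;

// Minimal sketch of a "wait until the state store is ready" loop.
// The supplier is expected to throw a RuntimeException (in Kafka Streams,
// InvalidStateStoreException) while the store is not yet queryable.
public class StoreReadiness {

    public static <T> T waitForStore(Supplier<T> storeSupplier,
                                     int maxAttempts,
                                     long sleepMs) throws InterruptedException {
        RuntimeException lastFailure = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return storeSupplier.get();   // store is ready, return it
            } catch (RuntimeException e) {    // store not ready yet, back off
                lastFailure = e;
                Thread.sleep(sleepMs);
            }
        }
        // Mirrors "terminate the application after 100 attempts":
        // rethrow the last failure so the caller can shut down.
        throw lastFailure;
    }
}
```

With this shape, a store that becomes queryable after a few rebalance-induced failures is returned normally, while a store that never becomes ready causes the last exception to propagate after the attempt budget is exhausted.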
We run Kafka on a 3-broker cluster. Below is a sample stream (not the entire code):
public BaseStream createStreamInstance() {
    final Serializer<JsonNode> jsonSerializer = new JsonSerializer();
    final Deserializer<JsonNode> jsonDeserializer = new JsonDeserializer();
    final Serde<JsonNode> jsonSerde = Serdes.serdeFrom(jsonSerializer, jsonDeserializer);

    MessagePayLoadParser<Note> noteParser = new MessagePayLoadParser<Note>(Note.class);
    GenericJsonSerde<Note> noteSerde = new GenericJsonSerde<Note>(Note.class);

    StreamsBuilder builder = new StreamsBuilder();

    // The reducer below uses sets to combine values.
    // value1 is what is already present in the store;
    // value2 is the incoming message and for notes should have at most 1 item
    // in its list (since it is 1 attachment / 1 tag per row, but multiple rows per note).
    Reducer<Note> reducer = new Reducer<Note>() {
        @Override
        public Note apply(Note value1, Note value2) {
            value1.merge(value2);
            return value1;
        }
    };

    KTable<Long, Note> noteTable = builder
        .stream(this.subTopic, Consumed.with(jsonSerde, jsonSerde))
        .map(noteParser::parse)
        .groupByKey(Serialized.with(Serdes.Long(), noteSerde))
        .reduce(reducer);

    noteTable.toStream().to(this.pubTopic, Produced.with(Serdes.Long(), noteSerde));

    this.stream = new KafkaStreams(builder.build(), this.properties);
    return this;
}
Answer 0 (score: 0)
There are still a few open questions here, such as the one Matthias posted, but I'll try to answer and help with your actual problem: