When I try to access the state store from my Kafka Streams application, I get the following error:
"The state store, count-store, may have migrated to another instance."
I get this message when I try to obtain a ReadOnlyKeyValueStore via KafkaStreams#store, even though only a single broker is up and running.
/**
 * Kafka Streams application that counts TrackingEvents per unique id and tries
 * to query the resulting "count-store" state store interactively.
 */
package com.ms.kafka.com.ms.stream;

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.InvalidStateStoreException;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.QueryableStoreType;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

import com.ms.kafka.com.ms.entity.TrackingEvent;
import com.ms.kafka.com.ms.entity.TrackingEventDeserializer;
import com.ms.kafka.com.ms.entity.TrackingEvnetSerializer;

/**
 * @author vettri
 */
public class EventStreamer {

    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "trackeventstream_stream");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.CLIENT_ID_CONFIG, "testappdi");
        props.put("auto.offset.reset", "earliest");
        // props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        // props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        final StreamsBuilder builder = new StreamsBuilder();
        final KStream<String, TrackingEvent> eventStream = builder.stream(
                "rt_event_command_topic_stream",
                Consumed.with(Serdes.String(),
                        Serdes.serdeFrom(new TrackingEvnetSerializer(), new TrackingEventDeserializer())));

        // Count events per unique id; the counts are materialized in the "count-store" state store.
        KTable<String, Long> groupedByUniqueId = eventStream
                .groupBy((k, v) -> v.getUniqueid())
                .count(Materialized.as("count-store"));

        // KTable<Integer, Integer> table = builder.table("rt_event_topic_stream",
        //         Materialized.as("queryable-store-name"));
        // eventStream.filter((k, v) -> "9de3b676-b20f-4b7a-878b-526fd5948a34".equalsIgnoreCase(v.getUniqueid()))
        //         .foreach((k, v) -> System.out.println(v));

        final KafkaStreams stream = new KafkaStreams(builder.build(), props);
        stream.cleanUp();
        stream.start();
        System.out.println("Stream state : " + stream.state().name());

        String queryableStoreName = groupedByUniqueId.queryableStoreName();
        // ReadOnlyKeyValueStore<String, Long> keyValStore1 = waitUntilStoreIsQueryable(
        //         queryableStoreName, QueryableStoreTypes.<String, Long>keyValueStore(), stream);

        // This direct call fails with InvalidStateStoreException ("may have migrated to another instance")
        // when the store is not yet open for querying.
        ReadOnlyKeyValueStore<String, Long> keyValStore =
                stream.store(queryableStoreName, QueryableStoreTypes.<String, Long>keyValueStore());
        // System.out.println("results --> " + keyValStore.get("158"));
        // stream.close();
    }

    public static <T> T waitUntilStoreIsQueryable(final String storeName,
            final QueryableStoreType<T> queryableStoreType, final KafkaStreams streams)
            throws InterruptedException {
        while (true) {
            try {
                return streams.store(storeName, queryableStoreType);
            } catch (InvalidStateStoreException ignored) {
                // Store is not yet ready for querying; back off and retry.
                System.out.println("Waiting for the state store to become queryable");
                Thread.sleep(100);
            }
        }
    }
}
I need to retrieve the data held in the state store; it is materialized locally, and I want to query it from this application.
Answer 0 (score: 1)
In your case, the local KafkaStreams instance is not yet ready, so its local state stores cannot be queried yet.
Before querying, you should wait until KafkaStreams is in the RUNNING state; you need to actually call your waitUntilStoreIsQueryable(...) helper instead of calling stream.store(...) directly.
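A minimal sketch of what that call could look like, replacing the direct stream.store(...) line in main (this assumes the count store holds String keys from getUniqueid() and Long counts, as in the question's code; "158" is only an example key):

// Retry through the helper until the local "count-store" is open for queries,
// instead of failing immediately with InvalidStateStoreException.
ReadOnlyKeyValueStore<String, Long> keyValStore = waitUntilStoreIsQueryable(
        groupedByUniqueId.queryableStoreName(),
        QueryableStoreTypes.<String, Long>keyValueStore(),
        stream);
// Use one of your actual uniqueid values here.
System.out.println("results --> " + keyValStore.get("158"));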
An example can be found in the Confluent examples on GitHub.
For more details on why this happens, see: https://docs.confluent.io/current/streams/faq.html#handling-invalidstatestoreexception-the-state-store-may-have-migrated-to-another-instance
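If you also want to wait for the RUNNING state explicitly, one possible sketch (my own suggestion, not taken from the linked pages) is to register a state listener before start() and block until the instance reports RUNNING:

// Requires java.util.concurrent.CountDownLatch and java.util.concurrent.TimeUnit.
final CountDownLatch running = new CountDownLatch(1);
stream.setStateListener((newState, oldState) -> {
    if (newState == KafkaStreams.State.RUNNING) {
        running.countDown();
    }
});
stream.start();
if (!running.await(30, TimeUnit.SECONDS)) {
    throw new IllegalStateException("KafkaStreams did not reach RUNNING within 30 seconds");
}
// Even in RUNNING the store can briefly become unavailable during a rebalance,
// so keeping the retry loop around stream.store(...) is still advisable.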