RocksDB exception in Kafka Streams

Time: 2019-03-25 11:57:31

Tags: lambda apache-kafka kafka-consumer-api apache-kafka-streams

In a simple Kafka Streams program, the following code works fine and does not throw any errors:

    KTable<String, Long> result = source
            .mapValues(textLine -> textLine.toLowerCase())
            .flatMapValues(lowercasedTextLine -> Arrays.asList(lowercasedTextLine.split(" ")))
            .selectKey((ignoredKey, word) -> word)
            .groupByKey()
            .count("Counts");

    result.to(Serdes.String(), Serdes.Long(), "wc-output");
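
For context, this is roughly the driver code such a snippet sits in. The sketch below assumes a Kafka Streams 1.0.x dependency; the application id, bootstrap server, and shutdown handling are illustrative and not taken from the question:

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;

    public class WordCountApp {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Illustrative configuration values; adjust to your environment.
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-wordcount");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            // "wc-input" is the source topic used in the question.
            KStream<String, String> source = builder.stream("wc-input");

            // ... build the topology from the snippets in this question here ...

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            // Release state stores cleanly on shutdown.
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }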

However, when I use the code below, I get the following error:

    KStream<String, String> source = builder.stream("wc-input");
    source.groupBy((key, word) -> word)
            .windowedBy(TimeWindows.of(TimeUnit.SECONDS.toMillis(5000)))
            .count()
            .toStream()
            .map((key, value) -> new KeyValue<>(key.key(), value))
            .to("wc-output", Produced.with(Serdes.String(), Serdes.Long()));
  

    Exception in thread "streams-wordcount-b160d715-f0e0-42ee-831e-0e4eed7e9424-StreamThread-1" org.apache.kafka.streams.errors.StreamsException: Exception caught in process. taskId=1_0, processor=KSTREAM-SOURCE-0000000006, topic=streams-wordcount-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition, partition=0, offset=0
        at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:232)
        at org.apache.kafka.streams.processor.internals.AssignedTasks.process(AssignedTasks.java:403)
        at org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:317)
        at org.apache.kafka.streams.processor.internals.StreamThread.processAndMaybeCommit(StreamThread.java:942)
        at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:822)
        at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:774)
        at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:744)
    Caused by: org.apache.kafka.streams.errors.ProcessorStateException: Error opening store KSTREAM-AGGREGATE-STATE-STORE-0000000002:1553472000000 at location \tmp\kafka-streams\streams-wordcount\1_0\KSTREAM-AGGREGATE-STATE-STORE-0000000002\KSTREAM-AGGREGATE-STATE-STORE-0000000002:1553472000000
        at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:204)
        at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:174)
        at org.apache.kafka.streams.state.internals.Segment.openDB(Segment.java:40)
        at org.apache.kafka.streams.state.internals.Segments.getOrCreateSegment(Segments.java:89)
        at org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStore.put(RocksDBSegmentedBytesStore.java:81)
        at org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:43)
        at org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:34)
        at org.apache.kafka.streams.state.internals.ChangeLoggingWindowBytesStore.put(ChangeLoggingWindowBytesStore.java:67)
        at org.apache.kafka.streams.state.internals.ChangeLoggingWindowBytesStore.put(ChangeLoggingWindowBytesStore.java:33)
        at org.apache.kafka.streams.state.internals.CachingWindowStore$1.apply(CachingWindowStore.java:100)
        at org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:141)
        at org.apache.kafka.streams.state.internals.NamedCache.evict(NamedCache.java:232)
        at org.apache.kafka.streams.state.internals.ThreadCache.maybeEvict(ThreadCache.java:245)
        at org.apache.kafka.streams.state.internals.ThreadCache.put(ThreadCache.java:153)
        at org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:157)
        at org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:36)
        at org.apache.kafka.streams.state.internals.MeteredWindowStore.put(MeteredWindowStore.java:96)
        at org.apache.kafka.streams.kstream.internals.KStreamWindowAggregate$KStreamWindowAggregateProcessor.process(KStreamWindowAggregate.java:122)
        at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:46)
        at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:208)
        at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:124)
        at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:85)
        at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:80)
        at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:216)
        ... 6 more
    Caused by: org.rocksdb.RocksDBException: Failed to create dir: H:\tmp\kafka-streams\streams-wordcount\1_0\KSTREAM-AGGREGATE-STATE-STORE-0000000002\KSTREAM-AGGREGATE-STATE-STORE-0000000002:1553472000000: Invalid argument
        at org.rocksdb.RocksDB.open(Native Method)
        at org.rocksdb.RocksDB.open(RocksDB.java:231)
        at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:197)

1 answer:

Answer 0 (score: 1)

When you use a windowed aggregation, a different kind of state store is used, and Kafka 1.0.0 has a bug that affects the Windows operating system: the directory names of the windowed store contain a ":" character, which is not allowed in Windows paths. The bug was fixed in version 1.0.1.

Cf. https://issues.apache.org/jira/browse/KAFKA-6167
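
In other words, the topology itself does not need to change; upgrading the kafka-streams dependency to 1.0.1 or later is the fix. After upgrading, it can also help to wipe the local state that the failed 1.0.0 run left behind (the directories under \tmp\kafka-streams shown in the stack trace). A small sketch, assuming the builder and props from the setup above; note that KafkaStreams#cleanUp() must be called while the instance is not running:

    // Requires kafka-streams 1.0.1+ on Windows (see KAFKA-6167).
    KafkaStreams streams = new KafkaStreams(builder.build(), props);
    // Remove this application's local state directory so segments created by the
    // buggy 1.0.0 run (with ":" in their names) do not linger on disk.
    streams.cleanUp();
    streams.start();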