Why does the state store fail with a serialization issue?

Date: 2018-12-27 16:45:57

Tags: apache-kafka apache-kafka-streams

I am using Kafka Streams 1.1.0.

I created the following topology:

Topologies:
   Sub-topology: 0
    Source: KSTREAM-SOURCE-0000000001 (topics: [configurationTopicName])
      --> KTABLE-SOURCE-0000000002
    Processor: KTABLE-SOURCE-0000000002 (stores: [configurationTopicName-STATE-STORE-0000000000])
      --> KTABLE-MAPVALUES-0000000003
      <-- KSTREAM-SOURCE-0000000001
    Processor: KTABLE-MAPVALUES-0000000003 (stores: [configuration_store_application1])
      --> none
      <-- KTABLE-SOURCE-0000000002

The code looks like this:

case class Test(name: String, age: Int)
val mal: Materialized[String, Test, KeyValueStore[Bytes, Array[Byte]]] =
  Materialized.as[String, Test, KeyValueStore[Bytes, Array[Byte]]](configurationStoreName(applicationId))
builder.table(configurationTopicName, Consumed.`with`(Serdes.String(), Serdes.String()))
  .someAdditionalTransformation
  .mapValues[Test](
      new ValueMapperWithKey[String, String, Test] {
         override def apply(readOnlyKey: String, value: String): Test = Test("aaa", 432)
      }, mal)

I want to build a queryable state store that I can use later in interactive queries (to retrieve the filtered/transformed values).

I ran a simple test using TopologyTestDriver, and the following exception was thrown:


Caused by: java.lang.ClassCastException: com.example.kafka.streams.topology.Test cannot be cast to java.lang.String
    at org.apache.kafka.common.serialization.StringSerializer.serialize(StringSerializer.java:28)
    at org.apache.kafka.streams.state.StateSerdes.rawValue(StateSerdes.java:178)
    at org.apache.kafka.streams.state.internals.MeteredKeyValueBytesStore$1.innerValue(MeteredKeyValueBytesStore.java:66)
    at org.apache.kafka.streams.state.internals.MeteredKeyValueBytesStore$1.innerValue(MeteredKeyValueBytesStore.java:57)
    at org.apache.kafka.streams.state.internals.InnerMeteredKeyValueStore.put(InnerMeteredKeyValueStore.java:198)
    at org.apache.kafka.streams.state.internals.MeteredKeyValueBytesStore.put(MeteredKeyValueBytesStore.java:117)
    at org.apache.kafka.streams.kstream.internals.KTableMapValues$KTableMapValuesProcessor.process(KTableMapValues.java:103)
    at org.apache.kafka.streams.kstream.internals.KTableMapValues$KTableMapValuesProcessor.process(KTableMapValues.java:83)
    at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:46)
    at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:208)
    at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:124)
    at org.apache.kafka.streams.processor.internals.AbstractProcessorContext.forward(AbstractProcessorContext.java:174)
    at org.apache.kafka.streams.kstream.internals.KTableFilter$KTableFilterProcessor.process(KTableFilter.java:89)
    at org.apache.kafka.streams.kstream.internals.KTableFilter$KTableFilterProcessor.process(KTableFilter.java:63)
    at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:46)
    at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:208)
    at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:124)
    at org.apache.kafka.streams.processor.internals.AbstractProcessorContext.forward(AbstractProcessorContext.java:174)
    at org.apache.kafka.streams.kstream.internals.ForwardingCacheFlushListener.apply(ForwardingCacheFlushListener.java:42)
    at org.apache.kafka.streams.state.internals.CachingKeyValueStore.putAndMaybeForward(CachingKeyValueStore.java:101)
    at org.apache.kafka.streams.state.internals.CachingKeyValueStore.access$000(CachingKeyValueStore.java:38)
    at org.apache.kafka.streams.state.internals.CachingKeyValueStore$1.apply(CachingKeyValueStore.java:83)
    at org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:142)
    at org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:100)
    at org.apache.kafka.streams.state.internals.ThreadCache.flush(ThreadCache.java:127)
    at org.apache.kafka.streams.state.internals.CachingKeyValueStore.flush(CachingKeyValueStore.java:123)
    at org.apache.kafka.streams.state.internals.InnerMeteredKeyValueStore.flush(InnerMeteredKeyValueStore.java:267)
    at org.apache.kafka.streams.state.internals.MeteredKeyValueBytesStore.flush(MeteredKeyValueBytesStore.java:149)
    at org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush(ProcessorStateManager.java:244)
    ... 58 more

Does anyone know why this happens and how to fix it?

1 Answer:

Answer 0 (score: 0)

After some investigation I found the cause of the exception above.

I had created the Materialized to store the data, but I did not pass any Serdes for the key or the value.

If you do not pass any, the default Serdes are used. In my case the default was the StringSerializer, and I was trying to serialize an object of the Test class with a StringSerializer. Mea culpa.
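The failing cast can be reproduced without Kafka at all. Below is a minimal, stdlib-only sketch (the helper name stringSerialize is hypothetical) that mirrors what StringSerializer.serialize effectively does when the store's value serde defaults to String but the processor hands it a Test:

```scala
// Hypothetical stand-in for StringSerializer.serialize: it assumes the
// incoming value is a String, so a Test instance triggers the same
// ClassCastException seen in the stack trace.
case class Test(name: String, age: Int)

def stringSerialize(value: Any): Array[Byte] = {
  val s = value.asInstanceOf[String] // the cast that fails for Test
  s.getBytes("UTF-8")
}

val ok = stringSerialize("plain string") // works for actual Strings

val failed =
  try { stringSerialize(Test("aaa", 432)); false }
  catch { case _: ClassCastException => true }

println(failed) // true: a Test cannot be cast to java.lang.String
```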

It is enough to pass the value Serde with .withValueSerde(GenericSerde[Test]), where GenericSerde is an implementation of org.apache.kafka.common.serialization.Serde:

val mal: Materialized[String, Test, KeyValueStore[Bytes, Array[Byte]]] =
  Materialized.as[String, Test, KeyValueStore[Bytes, Array[Byte]]](configurationStoreName(applicationId))
    .withValueSerde(GenericSerde[Test])
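For illustration, here is a stdlib-only sketch of the serializer/deserializer pair that such a GenericSerde[Test] could delegate to, using plain Java serialization (Scala case classes are Serializable). A real implementation would wrap these two functions in org.apache.kafka.common.serialization.Serde; the function names and the choice of Java serialization are assumptions, not the answer's actual code:

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}

case class Test(name: String, age: Int)

// serialize: object -> bytes, as a Serializer[T] would do
def serialize[T](data: T): Array[Byte] = {
  val bos = new ByteArrayOutputStream()
  val out = new ObjectOutputStream(bos)
  out.writeObject(data)
  out.close()
  bos.toByteArray
}

// deserialize: bytes -> object, as a Deserializer[T] would do
def deserialize[T](bytes: Array[Byte]): T = {
  val in = new ObjectInputStream(new ByteArrayInputStream(bytes))
  val obj = in.readObject().asInstanceOf[T]
  in.close()
  obj
}

// Round-trip: the value survives the store's byte encoding unchanged.
val roundTripped = deserialize[Test](serialize(Test("aaa", 432)))
println(roundTripped) // Test(aaa,432)
```

In practice a JSON or Avro based Serde is usually preferred over Java serialization for state stores, since the bytes also land in the store's changelog topic and must stay readable across application versions.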