Hi, I can't seem to size the pods for my Kafka Streams application (running on a Java 11 JRE) correctly, and I keep getting OOMKilled containers.
The job aggregates a large number of concurrent values:
KTable<String, MinuteValue> minuteValuesKtable = builder.table(
    "minuteTopicCompact",
    Materialized.<String, MinuteValue, KeyValueStore<Bytes, byte[]>>with(Serdes.String(), minuteValueSerdes)
        .withLoggingEnabled(new HashMap<>()));

KStream<String, MinuteAggreg> minuteAggByDay = minuteValuesKtable
    // rekey each MinuteValue and group them
    .groupBy(
        (key, minuteValue) -> new KeyValue<>(getAggKey(minuteValue), minuteValue),
        Serialized.with(Serdes.String(), minuteValueSerdes))
    // aggregate to MinuteAggreg
    .aggregate(
        MinuteAggreg::new,
        (String key, MinuteValue value, MinuteAggreg aggregate) -> aggregate.addLine(value),
        (String key, MinuteValue value, MinuteAggreg aggregate) -> aggregate.removeLine(value),
        Materialized.with(Serdes.String(), minuteAggregSerdes))
    .toStream()
    // [...] send to another topic
I have tried tuning these values:
// memory sizing and caches
properties.put(StreamsConfig.WINDOW_STORE_CHANGE_LOG_ADDITIONAL_RETENTION_MS_CONFIG, 5 * 60 * 1000L);
// Enable record cache of size 8 MB.
properties.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 8 * 1024 * 1024L);
// Set commit interval to 1 second.
properties.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 1000);
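Since the state stores are RocksDB-backed, I suspect much of the memory use is off-heap and not covered by the record cache setting at all. A minimal sketch of bounding RocksDB memory per store that I am considering, assuming the Kafka Streams `RocksDBConfigSetter` API (the cache and buffer sizes here are illustrative, not recommendations):

```java
import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Options;

public class BoundedRocksDBConfig implements RocksDBConfigSetter {
    @Override
    public void setConfig(String storeName, Options options, Map<String, Object> configs) {
        // Cap the off-heap block cache per store instance.
        BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
        tableConfig.setBlockCacheSize(16 * 1024 * 1024L);
        options.setTableFormatConfig(tableConfig);
        // Limit the number and size of in-memory write buffers (memtables).
        options.setMaxWriteBufferNumber(2);
        options.setWriteBufferSize(4 * 1024 * 1024L);
    }
}
```

which would be registered with `properties.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, BoundedRocksDBConfig.class);` — but I am unsure what values make sense for this workload.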
My Java 11 application is started with these arguments:
-XX:+UseContainerSupport
-XX:MaxRAMFraction=2
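I understand `-XX:MaxRAMFraction` is deprecated on JDK 10+, so I also considered the percentage variant, which gives finer control over how much of the container limit the heap may take (leaving the rest for off-heap allocations like RocksDB). The percentage value below is illustrative:

```shell
-XX:+UseContainerSupport
-XX:MaxRAMPercentage=50.0
```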
And the pod has memory limits set:
Limits:
cpu: 4
memory: 2Gi
Requests:
cpu: 2
memory: 1Gi
But the pods still fail, and Kubernetes kills them with "OOMKilled".
Could a Kafka Streams expert help me tune these values?
I have already read https://docs.confluent.io/current/streams/sizing.html#troubleshooting and https://kafka.apache.org/10/documentation/streams/developer-guide/memory-mgmt.html,
but could not find a comprehensive yet simple answer on how to tune them.