I want to compute an average with Kafka Streams. To do that I use an aggregation, which requires a state store, but the state store is never created.
Here is the average function:
private void average() {
    StreamsBuilder builder = new StreamsBuilder();
    KStream<GenericRecord, GenericRecord> source = builder.stream(this.topicSrc);
    KStream<String, Double> average = source
            .mapValues(value -> createJson(value.toString()))
            .map((key, value) -> KeyValue.pair(this.variable, value.getNumberValue(this.pathVariable, this.variable)))
            .groupByKey(Serialized.with(
                    Serdes.String(),
                    Serdes.String()))
            .aggregate(
                    () -> new Tuple(0, 0),
                    (aggKey, newValue, aggValue) -> new Tuple(aggValue.occ + 1, aggValue.sum + Integer.parseInt(newValue)),
                    Materialized.with(Serdes.String(), new MySerde()))
            .mapValues(v -> v.getAverage())
            .toStream();
    average.to(this.topicDest, Produced.with(Serdes.String(), Serdes.Double()));

    KafkaStreams stream = new KafkaStreams(builder.build(), props);
    stream.start();
}
The exception:
Exception in thread "Thread-0" org.apache.kafka.streams.errors.StreamsException: org.apache.kafka.streams.errors.ProcessorStateException: base state directory [/tmp/kafka-streams] doesn't exist and couldn't be created
at org.apache.kafka.streams.KafkaStreams.<init>(KafkaStreams.java:658)
at org.apache.kafka.streams.KafkaStreams.<init>(KafkaStreams.java:628)
at org.apache.kafka.streams.KafkaStreams.<init>(KafkaStreams.java:538)
at it.imolinfo.sacmi.processor.Streamer.average(Streamer.java:167)
at it.imolinfo.sacmi.processor.Streamer.run(Streamer.java:180)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.streams.errors.ProcessorStateException: base state directory [/tmp/kafka-streams] doesn't exist and couldn't be created
at org.apache.kafka.streams.processor.internals.StateDirectory.<init>(StateDirectory.java:80)
at org.apache.kafka.streams.KafkaStreams.<init>(KafkaStreams.java:656)
... 5 more
The problem is that the base directory does not exist, but I expected Kafka Streams to create it when needed.
---EDIT----- I noticed that with a single processor computing the average of one variable there is no problem, but with two processors the error occurs.
Configuration file for 1 processor:
type->streamer
number->1
subtype->average
variabli->payload:T_DUR_CICLO
topicSrc->m0-tempi
topicDest->average
application.id->stream0
bootstrap.servers->localhost:9092
schema.registry.url->http://localhost:8081
default.key.serde->io.confluent.kafka.streams.serdes.avro.GenericAvroSerde
default.value.serde->io.confluent.kafka.streams.serdes.avro.GenericAvroSerde
Configuration file for 2 processors:
type->streamer
number->1
subtype->average
variabli->payload:T_DUR_CICLO
topicSrc->m0-tempi
topicDest->average
application.id->stream0
bootstrap.servers->localhost:9092
schema.registry.url->http://localhost:8081
default.key.serde->io.confluent.kafka.streams.serdes.avro.GenericAvroSerde
default.value.serde->io.confluent.kafka.streams.serdes.avro.GenericAvroSerde
type->streamer
number->1
subtype->average
variabli->payload:HMI_TEMP_E1
topicSrc->m0-temperature
topicDest->average
application.id->stream1
bootstrap.servers->localhost:9092
schema.registry.url->http://localhost:8081
default.key.serde->io.confluent.kafka.streams.serdes.avro.GenericAvroSerde
default.value.serde->io.confluent.kafka.streams.serdes.avro.GenericAvroSerde
Now I start the processors:
private void loadStreamer(Tuple t) {
    int number = Integer.parseInt(t.getNumber());
    for (int i = 0; i < number; i++) {
        String[] splitted = t.getVariables()[0].split(":");
        Streamer s = new Streamer(t.getSubType(), t.getTopicSrc(), t.getTopicDest(), splitted[0], splitted[1], t.getProp());
        Thread th = new Thread(s);
        th.start();
    }
}
The Tuple type holds the information read from the configuration file. The number used in the for loop is the one from the configuration file; in this case it is 1, but I can start several instances of the same process.
The Streamer class:
public class Streamer implements Runnable {

    private final String topicSrc;
    private final String topicDest;
    private final String variable;
    private final String pathVariable;
    private final String type;
    private final Properties props;

    public Streamer(String type, String topicSrc, String topicDest, String pathVariable, String variable, Properties props) {
        this.type = type;
        this.topicSrc = topicSrc;
        this.topicDest = topicDest;
        this.variable = variable;
        this.pathVariable = pathVariable;
        this.props = props;
    }

    private void average() {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<GenericRecord, GenericRecord> source = builder.stream(this.topicSrc);
        KStream<String, Double> average = source
                .mapValues(value -> createJson(value.toString()))
                .map((key, value) -> KeyValue.pair(this.variable, value.getNumberValue(this.pathVariable, this.variable) + ":" + value.getStringValue("timestamp")))
                .groupByKey(Serialized.with(
                        Serdes.String(),
                        Serdes.String()))
                .aggregate(
                        () -> new Tuple(0, 0, ""),
                        (aggKey, newValue, aggValue) -> new Tuple(aggValue.occ + 1, aggValue.sum + Integer.parseInt(newValue.split(":")[0]), newValue.split(":")[1]),
                        Materialized.with(Serdes.String(), new MySerde()))
                .mapValues((key, value) -> new AverageRecord(key, value.getDate(), value.getAverage()))
                .mapValues(v -> v.getAverage())
                .toStream();
        average.to(this.topicDest, Produced.with(Serdes.String(), Serdes.Double()));

        KafkaStreams stream = new KafkaStreams(builder.build(), props);
        stream.start();
    }
    public void run() {
        switch (this.type) {
            case "average":
                average();
                break;
            case "filter":
                filter();
                break;
            default:
                System.out.println("type not valid " + this.type);
                break;
        }
    }
}
So I have 2 threads and 2 Streamer objects, both running the average function. The only difference between the two streamers is the variable on which the average is computed.
Am I creating the streams in the wrong way?
Answer 0 (score: 1)
Give each stream its own state.dir configuration instead of the default one (a programmatic sketch of the same idea follows the config example below):
# stream1
...
state.dir=/tmp/stream1/kafka-stream
# stream2
...
state.dir=/tmp/stream2/kafka-stream
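Since the question builds a separate Properties object per Streamer, a minimal sketch of how the per-instance state.dir could be derived from the application.id; the class name, helper name, and path pattern here are illustrative assumptions, not part of the original post:

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class StateDirConfig {

    // Returns a copy of the base properties with a per-application state.dir,
    // e.g. /tmp/stream0/kafka-stream for application.id=stream0, so two
    // instances never share /tmp/kafka-streams.
    static Properties withOwnStateDir(Properties base, String applicationId) {
        Properties props = new Properties();
        props.putAll(base);
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, applicationId);
        props.put(StreamsConfig.STATE_DIR_CONFIG, "/tmp/" + applicationId + "/kafka-stream");
        return props;
    }

    public static void main(String[] args) {
        Properties base = new Properties();
        base.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        Properties stream0 = withOwnStateDir(base, "stream0");
        Properties stream1 = withOwnStateDir(base, "stream1");

        System.out.println(stream0.getProperty(StreamsConfig.STATE_DIR_CONFIG)); // /tmp/stream0/kafka-stream
        System.out.println(stream1.getProperty(StreamsConfig.STATE_DIR_CONFIG)); // /tmp/stream1/kafka-stream
    }
}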
Answer 1 (score: 0)
This looks like a permissions issue. The Kafka Streams application will create the state directory as long as it has permission to write at the given path. The /tmp directory must be writable by the user running the application.
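A quick way to verify that assumption from the running user's point of view (illustrative snippet, not part of the original answer):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class StateDirCheck {
    public static void main(String[] args) {
        // Check that the user running the application can actually write under /tmp.
        Path base = Paths.get("/tmp");
        System.out.println("/tmp exists:   " + Files.exists(base));
        System.out.println("/tmp writable: " + Files.isWritable(base));
    }
}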
Answer 2 (score: 0)
All you need to do is call new File("/tmp/kafka-streams").mkdirs() before starting the streams. There is a race condition in the KafkaStreams startup.
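A minimal sketch of where that call could go before stream.start() in the question's average() method; the helper name and the extra exists() re-check (to tolerate two threads creating the directory at the same time) are assumptions, not from the answer:

import java.io.File;
import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;

public class SafeStart {
    // Create the base state directory before constructing KafkaStreams, so that
    // concurrently started instances do not fail on the missing directory.
    static KafkaStreams startWithStateDir(StreamsBuilder builder, Properties props, String stateDir) {
        File dir = new File(stateDir);
        // mkdirs() may return false if another thread created the directory first,
        // hence the second exists() check before giving up.
        if (!dir.exists() && !dir.mkdirs() && !dir.exists()) {
            throw new IllegalStateException("could not create state directory " + dir);
        }
        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        return streams;
    }
}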
Answer 3 (score: 0)
For me this error was simply caused by a full disk (well, a Kubernetes PVC), and increasing its size solved the problem. A bit cryptic!