Why doesn't logging work with Akka Streams?

Date: 2019-05-02 19:06:23

Tags: scala logging akka akka-stream alpakka

I am using Alpakka with the following toy example:

import akka.actor.ActorSystem
import akka.event.{Logging, LoggingAdapter}
import akka.kafka.scaladsl.Consumer
import akka.kafka.{ConsumerSettings, Subscriptions}
import akka.stream.alpakka.file.scaladsl.LogRotatorSink
import akka.stream.{ActorMaterializer, Attributes}
import akka.util.ByteString
import com.typesafe.scalalogging.Logger
import org.apache.kafka.common.serialization.StringDeserializer

import scala.concurrent.ExecutionContextExecutor
import scala.util.{Failure, Success}

val system = ActorSystem("system")
implicit val materializer: ActorMaterializer = ActorMaterializer.create(system)

// LoggingAdapter picked up implicitly by the stream's .log() stage
implicit val adapter: LoggingAdapter = Logging(system, "customLogger")
implicit val ec: ExecutionContextExecutor = system.dispatcher

// application logger (scala-logging)
val log = Logger(this.getClass)

val consumerConfig = system.settings.config.getConfig("akka.kafka.consumer")
val consumerSettings: ConsumerSettings[String, String] =
  ConsumerSettings(consumerConfig, new StringDeserializer, new StringDeserializer)
    .withBootstrapServers("localhost:9092")
    .withGroupId("my-group")

def start() = {
  Consumer.plainSource(consumerSettings, Subscriptions.topics("test"))
    .log("My Consumer: ")
    .withAttributes(
      Attributes.logLevels(
        onElement = Logging.InfoLevel,
        onFinish = Logging.InfoLevel,
        onFailure = Logging.DebugLevel
      )
    )
    .filter(_ => true) // placeholder for some predicate
    .map(_.value)      // placeholder for some processing
    .map(out => ByteString(out))
    .runWith(LogRotatorSink(timeFunc)) // timeFunc is defined elsewhere
    .onComplete {
      case Success(_) => log.info("DONE")
      case Failure(e) => log.error("ERROR", e)
    }
}

This code works, but I have a logging problem. The first part, with the attributes, logs fine: as elements come in, they are logged to standard output. However, when LogRotatorSink finishes and the Future completes, I want "DONE" printed to standard output, and that never happens. Files are being produced, so the stream is running, but no "DONE" message ever reaches standard output.

How can I get "DONE" printed to standard output? My Akka configuration and Logback configuration are below:

akka {

  # Loggers to register at boot time (akka.event.Logging$DefaultLogger logs
  # to STDOUT)
  loggers = ["akka.event.slf4j.Slf4jLogger"]

  # Log level used by the configured loggers (see "loggers") as soon
  # as they have been started; before that, see "stdout-loglevel"
  # Options: OFF, ERROR, WARNING, INFO, DEBUG
  loglevel = "INFO"

  # Log level for the very basic logger activated during ActorSystem startup.
  # This logger prints the log messages to stdout (System.out).
  # Options: OFF, ERROR, WARNING, INFO, DEBUG
  stdout-loglevel = "INFO"

  # Filter of log events that is used by the LoggingAdapter before
  # publishing log events to the eventStream.
  logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"

}


<configuration>

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%highlight(%date{HH:mm:ss.SSS} %-5level %-50.50([%logger{50}])) - %msg%n</pattern>
        </encoder>
    </appender>

    <logger name="org.apache.kafka" level="INFO"/>

    <root level="INFO">
        <appender-ref ref="STDOUT"/>
    </root>

</configuration>

1 Answer:

Answer 0 (score: 1)

The logging is working. Your Future never completes because a Kafka source is an infinite stream: once the consumer has read everything up to the newest message in the topic, it simply waits for new messages to appear. In many use cases (a continuous feed of events, for example) shutting such a stream down would be a disaster, so a stream that runs indefinitely is the sane default.
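For illustration only, such a stream can also be stopped deliberately through the materialized Consumer.Control that Consumer.plainSource provides. A minimal sketch, assuming the consumerSettings and implicit materializer from the question, with Sink.ignore standing in for the real pipeline:

import akka.kafka.Subscriptions
import akka.kafka.scaladsl.Consumer
import akka.stream.scaladsl.{Keep, Sink}

// Keep both materialized values: the Control for shutting the consumer down,
// and the Future that completes when the stream terminates.
val (control, streamDone) =
  Consumer.plainSource(consumerSettings, Subscriptions.topics("test"))
    .toMat(Sink.ignore)(Keep.both)
    .run()

// Later, e.g. during application shutdown: stop polling Kafka, which
// completes the stream and lets its Future (and onComplete) fire.
control.shutdown()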

When should your stream actually end? Define that condition explicitly, and you will be able to close the stream with operators such as .take(n), .takeUntil(cond), or .takeWithin(time). The stream will then terminate, your Future will complete, and "DONE" will be printed.
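A minimal sketch of that idea applied to the pipeline from the question (reusing its consumerSettings, timeFunc, and log; the element count 1000 is an arbitrary placeholder):

import scala.concurrent.duration._

Consumer.plainSource(consumerSettings, Subscriptions.topics("test"))
  .take(1000)              // complete the stream after 1000 records...
  //.takeWithin(5.minutes) // ...or bound it by time instead
  .map(record => ByteString(record.value))
  .runWith(LogRotatorSink(timeFunc))
  .onComplete {
    case Success(_) => log.info("DONE")  // now reachable: the stream can finish
    case Failure(e) => log.error("ERROR", e)
  }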