Redirecting Kafka log messages to a separate log (instead of catalina.out)

Date: 2019-04-22 18:31:38

Tags: java apache-kafka logback spring-kafka

I have a project set up with Spring Boot in which I consume messages using Spring Kafka. The application is deployed on a standalone Tomcat instance. Spring Kafka produces a lot of log messages, and they all end up in catalina.out. Is there a way to redirect these Kafka log messages into the separate log created for the application (the dataRiver log)?

I am using logback for logging.

This is what my logback-spring.xml looks like:

<property name="LOGS" value="${catalina.base}/logs" />

<include resource="org/springframework/boot/logging/logback/base.xml" />

<appender name="DATA-RIVER" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${LOGS}/DataRiver.log</file>
    <append>true</append>
    <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
        <Pattern>%p %d %C{1.} [%t-[%X{threadid}]] %m%n</Pattern>
    </encoder>

    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>${LOGS}/archived/DataRiver-%d{yyyy-MM-dd}.log</fileNamePattern>
    </rollingPolicy>
</appender>

<appender name="ERROR" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${LOGS}/DataRiver-Err.log</file>
    <append>true</append>
    <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
        <Pattern>%p %d{ISO8601} [%t-[%X{threadid}]] - %m%n</Pattern>
    </encoder>

    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>${LOGS}/archived/DataRiver-Err-%d{yyyy-MM-dd}.log</fileNamePattern>
    </rollingPolicy>
</appender>

<logger name="dataRiver" level="INFO" additivity="false">
    <appender-ref ref="DATA-RIVER"/>
</logger>

<logger name="error" level="WARN" additivity="false">
    <appender-ref ref="ERROR"/>
</logger>

Here is my logging service:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingService {

  public static final Logger LOGGER_DATA_RIVER = LoggerFactory.getLogger("dataRiver");
  public static final Logger LOGGER_ERROR = LoggerFactory.getLogger("error");

}

Here is my consumer configuration:

@Bean
public ConsumerFactory<String, GenericData.Record> testConsumerFactoryFirst() {
  Map<String, Object> dataRiverProps = setTestDataRiverProps();
  dataRiverProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, env.getProperty("test1.bootstrap.servers"));
  dataRiverProps.put(KafkaAvroDeserializerConfig.SCHEMA_REGISTRY_URL_CONFIG, env.getProperty("test1.schema.registry.url"));
  return new DefaultKafkaConsumerFactory<>(dataRiverProps);
}

private ConcurrentKafkaListenerContainerFactory<String, GenericData.Record> testKafkaListenerContainerFactory(ConsumerFactory<String, GenericData.Record> consumerFactory) {
  ConcurrentKafkaListenerContainerFactory<String, GenericData.Record> factory = new ConcurrentKafkaListenerContainerFactory<>();
  factory.setConsumerFactory(consumerFactory);
  factory.setBatchListener(true);
  return factory;
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, GenericData.Record> testKafkaListenerContainerFactoryFirst() {
  return testKafkaListenerContainerFactory(testConsumerFactoryFirst());
}

And here is the consumer:

@KafkaListener(topics = "#{'${test.kafka.topics}'.split(',')}", containerFactory = "testKafkaListenerContainerFactoryFirst")
public void consumeAvroFirst(List<Message<GenericData.Record>> list) {
  consumeJsonMessageBatch(convertAvroToJsonBatch(list), "Kafka Consumer test-1");
}

private List<String> convertAvroToJsonBatch(List<Message<GenericData.Record>> list) {
  return list.stream().map(record -> record.getPayload().toString()).collect(Collectors.toList());
}

Here is an excerpt from catalina.out:

2019-04-20 02:16:54.190  WARN 18286 --- [ntainer#1-0-C-1] org.apache.kafka.clients.NetworkClient   : [Consumer clientId=consumer-12, groupId=DataRiver1] 10 partitions have leader brokers without a matching listener, including [test.autoEvents-8, test.autoEvents-2, test.autoEvents-4, test.loginEvaluationEvents-17, test.loginEvaluationEvents-11, test.loginEvaluationEvents-5, test.loginEvaluationEvents-13, test.loginEvaluationEvents-7, test.loginEvaluationEvents-1, test.loginEvaluationEvents-19]
2019-04-20 02:16:54.320  WARN 18286 --- [ntainer#1-0-C-1] org.apache.kafka.clients.NetworkClient   : [Consumer clientId=consumer-12, groupId=DataRiver1] 10 partitions have leader brokers without a matching listener, including [test.autoEvents-8, test.autoEvents-2, test.autoEvents-4, test.loginEvaluationEvents-17, test.loginEvaluationEvents-11, test.loginEvaluationEvents-5, test.loginEvaluationEvents-13, test.loginEvaluationEvents-7, test.loginEvaluationEvents-1, test.loginEvaluationEvents-19]
2019-04-20 02:16:54.320  WARN 18286 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient   : [Consumer clientId=consumer-10, groupId=DataRiver1] 10 partitions have leader brokers without a matching listener, including [test.autoEvents-8, test.autoEvents-2, test.autoEvents-4, test.loginEvaluationEvents-17, test.loginEvaluationEvents-11, test.loginEvaluationEvents-5, test.loginEvaluationEvents-13, test.loginEvaluationEvents-7, test.loginEvaluationEvents-1, test.loginEvaluationEvents-19]
2019-04-20 02:16:54.346  WARN 18286 --- [ntainer#2-0-C-1] org.apache.kafka.clients.NetworkClient   : [Consumer clientId=consumer-2, groupId=DataRiver1] 10 partitions have leader brokers without a matching listener, including [test.autoEvents-8, test.autoEvents-2, test.autoEvents-4, test.loginEvaluationEvents-17, test.loginEvaluationEvents-11, test.loginEvaluationEvents-5, test.loginEvaluationEvents-13, test.loginEvaluationEvents-7, test.loginEvaluationEvents-1, test.loginEvaluationEvents-19]

Thanks for your help!

1 Answer:

Answer 0 (score: 2)

Create a new appender in logback-spring.xml and add a filter to it. Create a filter class that matches the Kafka log events, then point the appender's output wherever you want it to go.

Appender:

<appender name="foo" class="com.foo.bar.SomeClass">
  <target>bar</target>
  <filter class="com.foo.KafkaFilter" />
   ...
</appender>
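
As a concrete sketch of that appender, assuming the rolling-file setup from the question is reused (the appender name KAFKA-LOG and the file names here are placeholders, not from the original answer):

```xml
<appender name="KAFKA-LOG" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <!-- Only events accepted by the filter reach this file -->
    <filter class="com.foo.KafkaFilter" />
    <file>${LOGS}/Kafka.log</file>
    <append>true</append>
    <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
        <Pattern>%p %d{ISO8601} [%t] %m%n</Pattern>
    </encoder>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>${LOGS}/archived/Kafka-%d{yyyy-MM-dd}.log</fileNamePattern>
    </rollingPolicy>
</appender>
```

The appender then has to be referenced from a logger that actually receives the Kafka events (for example the root logger). Note that a filter only controls what this appender writes; to keep the messages out of catalina.out as well, they must also stop propagating there, e.g. via a dedicated `<logger name="org.apache.kafka" additivity="false">` as in the linked answer below.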

Filter class:

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.filter.AbstractMatcherFilter;
import ch.qos.logback.core.spi.FilterReply;

public class KafkaFilter extends AbstractMatcherFilter<ILoggingEvent> {

    @Override
    public FilterReply decide(ILoggingEvent event) {
        // Kafka client log events come from loggers under org.apache.kafka,
        // so match on the logger name rather than the message text.
        if (event.getLoggerName().startsWith("org.apache.kafka")) {
            return FilterReply.ACCEPT;
        }
        return FilterReply.DENY;
    }
}
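
The matching rule itself boils down to a prefix check: the Kafka client classes log under logger names starting with org.apache.kafka, which is why the filter inspects the logger name. As a standalone sketch of just that rule (the class and method names are illustrative, not part of logback):

```java
public class KafkaLoggerMatch {

    // True when the logger name belongs to the Kafka client hierarchy
    static boolean isKafkaLogger(String loggerName) {
        return loggerName != null && loggerName.startsWith("org.apache.kafka");
    }

    public static void main(String[] args) {
        System.out.println(isKafkaLogger("org.apache.kafka.clients.NetworkClient")); // true
        System.out.println(isKafkaLogger("dataRiver")); // false
    }
}
```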

I recommend taking a look at this answer: https://stackoverflow.com/a/5653532/10368579

The logback documentation has more information on matchers here: https://logback.qos.ch/manual/filters.html#matcher