We have a log collecting service that automatically splits messages at 64KB, but the split is not graceful. We print individual log messages as JSON blobs with some extra metadata, and sometimes these include large stack traces that we want to keep intact.
So I'm looking at writing a custom logger or appender wrapper that would take the message, split it into smaller chunks, and re-log each chunk, but that seems non-trivial.
Is there a simple way to configure logback to split a message into multiple separate messages when it exceeds a certain size?
Here is the appender configuration:
<!-- Sumo optimized rolling log file -->
<appender name="file" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <Append>true</Append>
    <file>${log.dir}/${service.name}-sumo.log</file>
    <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
        <providers>
            <timestamp>
                <fieldName>t</fieldName>
                <pattern>yyyy-MM-dd'T'HH:mm:ss.SSS'Z'</pattern>
                <timeZone>UTC</timeZone>
            </timestamp>
            <message/>
            <loggerName/>
            <threadName/>
            <logLevel/>
            <stackTrace>
                <if condition='isDefined("throwable.converter")'>
                    <then>
                        <throwableConverter class="${throwable.converter}"/>
                    </then>
                </if>
            </stackTrace>
            <mdc/>
            <tags/>
        </providers>
    </encoder>
    <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
        <maxIndex>1</maxIndex>
        <FileNamePattern>${log.dir}/${service.name}-sumo.log.%i</FileNamePattern>
    </rollingPolicy>
    <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
        <MaxFileSize>256MB</MaxFileSize>
    </triggeringPolicy>
</appender>

<appender name="sumo" class="ch.qos.logback.classic.AsyncAppender">
    <queueSize>500</queueSize>
    <discardingThreshold>0</discardingThreshold>
    <appender-ref ref="file" />
</appender>
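For context, each event comes out of this encoder as a single JSON line, roughly like the one below (field names follow the logstash-logback-encoder defaults for these providers; the class names and values are made up for illustration). With a deep stack trace folded into stack_trace, one line can easily blow past the collector's 64KB limit:

{"t":"2018-01-15T10:23:45.123Z","message":"request failed","logger_name":"com.example.MyService","thread_name":"pool-1-thread-3","level":"ERROR","stack_trace":"java.lang.RuntimeException: boom\n\tat com.example.MyService.doWork(MyService.java:42)\n\t..."}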
Answer (score: 2)
The solution I came up with is just to wrap my logger in something that splits messages nicely. Note that I'm mainly interested in splitting messages that carry a Throwable, since those are what produce the long messages.
This is written for Java 8, using lambdas.
Also note that this code hasn't been thoroughly tested; I'll update it if I find any bugs.
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import java.io.UnsupportedEncodingException;
import java.util.function.Consumer;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.helpers.MarkerIgnoringBase;

public class MessageSplittingLogger extends MarkerIgnoringBase {
    // Target size is 64k for the split. UTF-8 nominally has 1-byte characters, but some
    // characters encode to more than 1 byte, so leave some wiggle room.
    // Also leave room for the "(n of m)" prefix and other per-message overhead.
    private static final int MAX_CHARS_BEFORE_SPLIT = 56000;
    private static final String ENCODING = "UTF-8";

    private final Logger LOGGER;

    public MessageSplittingLogger(Class<?> clazz) {
        this.LOGGER = LoggerFactory.getLogger(clazz);
    }

    private void splitMessageAndLog(String msg, Throwable t, Consumer<String> logLambda) {
        // Fold the stack trace into the message so it gets split along with it.
        String combinedMsg = msg + (t != null ? "\nStack Trace:\n" + printStackTraceToString(t) : "");
        int totalMessages = combinedMsg.length() / MAX_CHARS_BEFORE_SPLIT;
        if (combinedMsg.length() % MAX_CHARS_BEFORE_SPLIT > 0) {
            totalMessages++;
        }

        int index = 0;
        int msgNumber = 1;
        while (index < combinedMsg.length()) {
            // Only prefix "(n of m)" when the message actually needed splitting.
            String messageNumber = totalMessages > 1 ? "(" + msgNumber++ + " of " + totalMessages + ")\n" : "";
            logLambda.accept(messageNumber + combinedMsg.substring(index,
                    Math.min(index + MAX_CHARS_BEFORE_SPLIT, combinedMsg.length())));
            index += MAX_CHARS_BEFORE_SPLIT;
        }
    }

    /**
     * Get the stack trace as a String.
     */
    private String printStackTraceToString(Throwable t) {
        try {
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            PrintStream ps = new PrintStream(baos, true, ENCODING);
            t.printStackTrace(ps);
            return baos.toString(ENCODING);
        } catch (UnsupportedEncodingException e) {
            return "Exception printing stack trace: " + e.getMessage();
        }
    }

    @Override
    public String getName() {
        return LOGGER.getName();
    }

    @Override
    public boolean isTraceEnabled() {
        return LOGGER.isTraceEnabled();
    }

    @Override
    public void trace(String msg) {
        LOGGER.trace(msg);
    }

    @Override
    public void trace(String format, Object arg) {
        LOGGER.trace(format, arg);
    }

    @Override
    public void trace(String format, Object arg1, Object arg2) {
        LOGGER.trace(format, arg1, arg2);
    }

    @Override
    public void trace(String format, Object... arguments) {
        LOGGER.trace(format, arguments);
    }

    @Override
    public void trace(String msg, Throwable t) {
        splitMessageAndLog(msg, t, LOGGER::trace);
    }

    // ... similarly wrap the debug/info/warn/error overloads.
}
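For completeness, a minimal usage sketch (MyService and riskyOperation() are hypothetical names). Because MarkerIgnoringBase implements the SLF4J Logger interface, the wrapper drops in wherever a Logger is used:

public class MyService {
    private static final MessageSplittingLogger LOGGER =
            new MessageSplittingLogger(MyService.class);

    public void doWork() {
        try {
            riskyOperation();
        } catch (Exception e) {
            // A long stack trace is re-logged as "(1 of N)" chunks of
            // at most ~56k characters each.
            LOGGER.trace("riskyOperation failed", e);
        }
    }

    private void riskyOperation() { /* ... */ }
}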