Where can I see these LogCleaner statistics?

Asked: 2021-06-08 15:45:04

Tags: apache-kafka

I have downloaded the Apache Kafka source code and can see that some statistics are printed somewhere. Where can I find this information about the log cleaner threads? I don't see it in the logs.

Here is the statistics message assembled in the LogCleaner.scala file:

  val message =
    "%n\tLog cleaner thread %d cleaned log %s (dirty section = [%d, %d])%n".format(id, name, from, to) +
    "\t%,.1f MB of log processed in %,.1f seconds (%,.1f MB/sec).%n".format(mb(stats.bytesRead),
                                                                            stats.elapsedSecs,
                                                                            mb(stats.bytesRead/stats.elapsedSecs)) +
    "\tIndexed %,.1f MB in %.1f seconds (%,.1f Mb/sec, %.1f%% of total time)%n".format(mb(stats.mapBytesRead),
                                                                                       stats.elapsedIndexSecs,
                                                                                       mb(stats.mapBytesRead)/stats.elapsedIndexSecs,
                                                                                       100 * stats.elapsedIndexSecs/stats.elapsedSecs) +
    "\tBuffer utilization: %.1f%%%n".format(100 * stats.bufferUtilization) +
    "\tCleaned %,.1f MB in %.1f seconds (%,.1f Mb/sec, %.1f%% of total time)%n".format(mb(stats.bytesRead),
                                                                                       stats.elapsedSecs - stats.elapsedIndexSecs,
                                                                                       mb(stats.bytesRead)/(stats.elapsedSecs - stats.elapsedIndexSecs), 100 * (stats.elapsedSecs - stats.elapsedIndexSecs).toDouble/stats.elapsedSecs) +
    "\tStart size: %,.1f MB (%,d messages)%n".format(mb(stats.bytesRead), stats.messagesRead) +
    "\tEnd size: %,.1f MB (%,d messages)%n".format(mb(stats.bytesWritten), stats.messagesWritten) +
    "\t%.1f%% size reduction (%.1f%% fewer messages)%n".format(100.0 * (1.0 - stats.bytesWritten.toDouble/stats.bytesRead),
                                                               100.0 * (1.0 - stats.messagesWritten.toDouble/stats.messagesRead))
  info(message)

1 Answer:

Answer 0 (score: 1):

As @OneCricketeer pointed out, the specific log you are looking for is in the log-cleaner.log file. Here is an example of an entry from there (see the note after the example for how the logging configuration routes these messages into that file):

[2021-06-08 07:45:24,132] INFO Cleaner 0: Cleaning segment 6692959 in log __consumer_offsets-29 (largest timestamp Tue Jun 08 07:45:13 EDT 2021) into 6692959, retaining deletes. (kafka.log.LogCleaner)
[2021-06-08 07:45:24,717] INFO Cleaner 0: Swapping in cleaned segment LogSegment(baseOffset=6692959, size=3331) for segment(s) List(LogSegment(baseOffset=6692959, size=104856549)) in log Log(/apps/kafka-data/__consumer_offsets-29) (kafka.log.LogCleaner)
[2021-06-08 07:45:24,717] INFO [kafka-log-cleaner-thread-0]:
 Log cleaner thread 0 cleaned log __consumer_offsets-29 (dirty section = [6692959, 6692959])
 100.0 MB of log processed in 1.3 seconds (76.6 MB/sec).
 Indexed 100.0 MB in 0.7 seconds (144.7 Mb/sec, 53.0% of total time)
 Buffer utilization: 0.0%
 Cleaned 100.0 MB in 0.6 seconds (162.9 Mb/sec, 47.0% of total time)
 Start size: 100.0 MB (944,401 messages)
 End size: 0.0 MB (31 messages)
 100.0% size reduction (100.0% fewer messages)
 (kafka.log.LogCleaner)
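
The reason these entries end up in a dedicated file is the broker's logging configuration: the stock config/log4j.properties that ships with Kafka routes the kafka.log.LogCleaner logger to its own appender writing to log-cleaner.log. The snippet below is a sketch of that configuration (the appender name, pattern, and file location can differ between Kafka versions, and newer releases use Log4j 2 instead):

  # Dedicated appender for log cleaner thread output
  log4j.appender.cleanerAppender=org.apache.log4j.DailyRollingFileAppender
  log4j.appender.cleanerAppender.DatePattern='.'yyyy-MM-dd-HH
  log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log
  log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
  log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

  # Send LogCleaner messages only to that appender, not to server.log
  log4j.logger.kafka.log.LogCleaner=INFO, cleanerAppender
  log4j.additivity.kafka.log.LogCleaner=false

So if you don't see a log-cleaner.log file at all, check that your broker was started with this (or an equivalent) logging configuration and that kafka.logs.dir points where you expect.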

In addition to those entries, if you are interested in all log-cleanup activity, I also found entries like the following in the main server.log that are very useful: they show when log segments are marked for deletion and why:

[2021-05-19 21:55:05,828] INFO [Log partition=tracking.ap.client.traffic.keyed-2, dir=/apps/kafka-data] Found deletable segments with base offsets [11760980] due to retention time 2592000000ms breach (kafka.log.Log)
[2021-05-19 21:55:05,833] INFO [ProducerStateManager partition=tracking.ap.client.traffic.keyed-2] Writing producer snapshot at offset 11762941 (kafka.log.ProducerStateManager)
[2021-05-19 21:55:05,835] INFO [Log partition=tracking.ap.client.traffic.keyed-2, dir=/apps/kafka-data] Rolled new log segment at offset 11762941 in 7 ms. (kafka.log.Log)
[2021-05-19 21:55:05,835] INFO [Log partition=tracking.ap.client.traffic.keyed-2, dir=/apps/kafka-data] Scheduling log segment [baseOffset 11760980, size 1079204] for deletion. (kafka.log.Log)
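
For context, the "retention time 2592000000ms breach" in the first line reflects whatever retention setting is in effect for that topic (topic-level retention.ms, or the broker-wide log.retention.ms / log.retention.hours default): 2592000000 ms / 86 400 000 ms per day = 30 days, so segments whose records are all older than 30 days get scheduled for deletion.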