When I run my Java application and instantiate a KafkaConsumer object (supplying only the minimum required properties: the key and value deserializers plus a group.id), I see a lot of INFO messages on stdout (and WARNING messages as well if I supply unsupported properties).
I would like to see when fetch events happen. I assumed I could see this by raising the log level to DEBUG. Unfortunately, I have not been able to raise it.
I have tried providing a log4j.properties file in several ways (placing the file on specific paths, and passing it as a parameter via -Dlog4j.configuration). The output stays the same.
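For reference, here is a minimal sketch of the consumer setup described above (not the author's actual class); the broker addresses, group id, topic name, and deserializers are taken from the log output that follows, while the class structure itself is an assumption:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.IntegerDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class MinimalConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "kafka-server:9090,kafka-server:9091,kafka-server:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "carthusian-consumer");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("sequence"));
            while (true) {
                // Each poll() drives the fetch requests whose DEBUG-level
                // log lines this question is trying to surface.
                ConsumerRecords<Integer, String> records = consumer.poll(Duration.ofSeconds(1));
                System.out.printf("Loaded %d records%n", records.count());
            }
        }
    }
}

Running the application from NetBeans produces the output below; note that everything stays at INFO level: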
cd /Users/user/git/kafka/toys; JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_191.jdk/Contents/Home "/Applications/NetBeans/NetBeans 8.2.app/Contents/Resources/NetBeans/java/maven/bin/mvn" "-Dexec.args=-classpath %classpath ch.demo.toys.CarthusianConsumer" -Dexec.executable=/Library/Java/JavaVirtualMachines/jdk1.8.0_191.jdk/Contents/Home/bin/java -Dexec.classpathScope=runtime -DskipTests=true org.codehaus.mojo:exec-maven-plugin:1.2.1:exec
Running NetBeans Compile On Save execution. Phase execution is skipped and output directories of dependency projects (with Compile on Save turned on) will be used instead of their jar artifacts.
Scanning for projects...
------------------------------------------------------------------------
Building toys 1.0-SNAPSHOT
------------------------------------------------------------------------
--- exec-maven-plugin:1.2.1:exec (default-cli) @ toys ---
Jul 10, 2019 2:52:00 PM org.apache.kafka.common.config.AbstractConfig logAll
INFO: ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [kafka-server:9090, kafka-server:9091, kafka-server:9092]
check.crcs = true
client.dns.lookup = default
client.id =
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = carthusian-consumer
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.IntegerDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 100000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = DEBUG
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
Jul 10, 2019 2:52:01 PM org.apache.kafka.common.utils.AppInfoParser$AppInfo <init>
INFO: Kafka version: 2.3.0
Jul 10, 2019 2:52:01 PM org.apache.kafka.common.utils.AppInfoParser$AppInfo <init>
INFO: Kafka commitId: fc1aaa116b661c8a
Jul 10, 2019 2:52:01 PM org.apache.kafka.common.utils.AppInfoParser$AppInfo <init>
INFO: Kafka startTimeMs: 1562763121219
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.KafkaConsumer subscribe
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] Subscribed to topic(s): sequence
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.Metadata update
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] Cluster ID: REIXp5FySKGPHlRyfTALLQ
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.internals.AbstractCoordinator$FindCoordinatorResponseHandler onSuccess
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] Discovered group coordinator kafka-tds:9091 (id: 2147483646 rack: null)
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.internals.ConsumerCoordinator onJoinPrepare
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] Revoking previously assigned partitions []
Revoke event: []
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.internals.AbstractCoordinator sendJoinGroupRequest
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] (Re-)joining group
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.internals.AbstractCoordinator sendJoinGroupRequest
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] (Re-)joining group
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1 onSuccess
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] Successfully joined group with generation 96
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.internals.ConsumerCoordinator onJoinComplete
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] Setting newly assigned partitions: sequence-1, sequence-0
Assignment event: [sequence-1, sequence-0]
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.internals.SubscriptionState lambda$requestOffsetReset$3
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] Seeking to EARLIEST offset of partition sequence-1
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.internals.SubscriptionState lambda$requestOffsetReset$3
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] Seeking to EARLIEST offset of partition sequence-0
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.internals.SubscriptionState maybeSeekUnvalidated
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] Resetting offset for partition sequence-0 to offset 0.
Jul 10, 2019 2:52:01 PM org.apache.kafka.clients.consumer.internals.SubscriptionState maybeSeekUnvalidated
INFO: [Consumer clientId=consumer-1, groupId=carthusian-consumer] Resetting offset for partition sequence-1 to offset 0.
Loaded 9804 records from [sequence-0] partitions
Loaded 9804 records from [sequence-1] partitions
Loaded 9799 records from [sequence-0] partitions
Loaded 9799 records from [sequence-1] partitions
Loaded 9799 records from [sequence-0] partitions
Loaded 9799 records from [sequence-1] partitions
Loaded 9799 records from [sequence-0] partitions
Loaded 9799 records from [sequence-1] partitions
Loaded 9799 records from [sequence-0] partitions
Answer (score: 1)
Solved by placing the following (simple) log4j.properties under src/main/resources and running the application directly from the console rather than from the IDE (see the invocation sketched after the configuration below). Fetch messages are now shown.
# Root logger option
log4j.rootLogger=DEBUG, stdout
# Direct log messages to stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1} - %m%n
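For completeness, a console invocation along these lines should pick up the file once it is compiled into target/classes (this uses the same exec plugin NetBeans invokes above, but the exact goal and parameters here are an assumption, not the command actually used):

cd /Users/user/git/kafka/toys
mvn compile exec:java -Dexec.mainClass=ch.demo.toys.CarthusianConsumer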
At the moment I do not know which class produces the messages I am looking for, hence the DEBUG setting on the rootLogger.
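If the root-level DEBUG output turns out to be too noisy, a narrower configuration should also work, assuming the fetch messages come from the Kafka client packages (an assumption, since the exact logger is unknown):

# Keep everything else at INFO; raise only the Kafka client loggers
log4j.rootLogger=INFO, stdout
log4j.logger.org.apache.kafka.clients=DEBUG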