Update, August 15, 2018: I ran strace to watch the mprotect system call and confirmed that it really does block for several seconds:
strace -f -e trace=mprotect,mmap,munmap -T -t -p `pidof java` 2>&1 | tee mp1.txt
[pid 27007] 03:52:48 mprotect(0x7f9766226000, 4096, PROT_NONE) = 0 <3.631704>

But I still have not found the root cause.

Update, August 14, 2018: I found that this is a JVM stop-the-world (STW) event. I debugged the JVM with the following options:
-XX:+PrintGCApplicationStoppedTime
-XX:+PrintSafepointStatistics
-XX:PrintSafepointStatisticsCount=1
-XX:+SafepointTimeout
-XX:SafepointTimeoutDelay=500
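How these flags reach the broker JVM depends on your startup scripts; as a sketch, with the stock Apache/Confluent launch scripts the KAFKA_OPTS environment variable is one way to pass them:

export KAFKA_OPTS="-XX:+PrintGCApplicationStoppedTime -XX:+PrintSafepointStatistics \
  -XX:PrintSafepointStatisticsCount=1 -XX:+SafepointTimeout -XX:SafepointTimeoutDelay=500"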
Some of the resulting logs are shown below. Strangely, the spin and block times are essentially zero, while the sync time is 3301 ms:
vmop [threads: total initially_running wait_to_block] [time: spin block sync cleanup vmop] page_trap_count
488.188: no vm operation [ 73 1 1 ] [ 1 0 3301 0 0 ] 1
2018-08-13T22:16:09.744-0400: 491.491: Total time for which application threads were stopped: 3.3021375 seconds, Stopping threads took: 3.3018193 seconds
I compiled a JVM from the OpenJDK 1.8 sources and added some debug logging of my own. It turned out to be blocked in the code below: SafepointSynchronize::begin() calls os::make_polling_page_unreadable(), which calls ::mprotect, and that call has a semaphore dependency:
void SafepointSynchronize::begin() {
  // ...
  if (UseCompilerSafepoints && DeferPollingPageLoopCount < 0) {
    // Make polling safepoint aware
    guarantee (PageArmed == 0, "invariant") ;
    PageArmed = 1 ;
    os::make_polling_page_unreadable();
  }
  // ...
}
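For context, this is (slightly simplified) what that function looks like in the OpenJDK 8 Linux port, hotspot/src/os/linux/vm/os_linux.cpp; the guard_memory() call bottoms out in ::mprotect(addr, size, PROT_NONE):

// Simplified from OpenJDK 8, os_linux.cpp.
void os::make_polling_page_unreadable(void) {
  // guard_memory() -> linux_mprotect() -> ::mprotect(addr, size, PROT_NONE);
  // in the kernel, sys_mprotect then takes mmap_sem for writing.
  if (!guard_memory((char*)_polling_page, Linux::page_size())) {
    fatal("Could not disable polling page");
  }
}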
In the kernel, sys_mprotect takes the mmap_sem semaphore for writing before it touches the address space:

down_write(&current->mm->mmap_sem);

So I suspect that contention on mmap_sem is what causes this STW event, but I do not know which other code path is holding the semaphore. Can anyone help?
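To make the mmap_sem theory testable outside the JVM, here is a minimal standalone sketch of my own (not from the original post): one thread churns large mmap/munmap regions, which take mmap_sem for writing, while the main thread times an mprotect() pair on a separate page, mimicking make_polling_page_unreadable()/..._readable():

// mmap_sem contention repro sketch.
// Build: g++ -O2 -pthread mmap_sem_repro.cpp -o mmap_sem_repro
#include <sys/mman.h>
#include <unistd.h>
#include <chrono>
#include <cstdio>
#include <cstring>
#include <thread>

int main() {
    const size_t page = sysconf(_SC_PAGESIZE);
    const size_t chunk = 256UL << 20;  // 256 MiB churned per iteration

    // Stand-in for the JVM's polling page.
    void* poll = mmap(nullptr, page, PROT_READ,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    // Churn thread: mmap/munmap take mmap_sem for writing; the page
    // faults triggered by memset take it for reading.
    std::thread churn([&] {
        for (;;) {
            void* p = mmap(nullptr, chunk, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED) continue;
            memset(p, 1, chunk);
            munmap(p, chunk);
        }
    });
    churn.detach();

    // Main thread: time the same mprotect pattern the safepoint code uses.
    for (int i = 0; i < 1000; i++) {
        auto t0 = std::chrono::steady_clock::now();
        mprotect(poll, page, PROT_NONE);
        mprotect(poll, page, PROT_READ);
        double ms = std::chrono::duration<double, std::milli>(
                        std::chrono::steady_clock::now() - t0).count();
        if (ms > 10.0)
            printf("mprotect pair took %.1f ms\n", ms);
        usleep(10 * 1000);
    }
    return 0;
}

On an otherwise idle machine the pair completes in microseconds; you should only see spikes when something else is writing to the address space or the kernel is under memory/writeback pressure, which is why I suspect the real trigger in my case is some other holder of mmap_sem.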
Original question
I am testing the performance of Kafka. I created a topic with 36 partitions and a replication factor of 4 on a six-node cluster; a single ZooKeeper node runs on a separate machine:

kafka-topics --create --topic kf.p36.r4 --zookeeper l2 --partitions 36 --replication-factor 4
[root@g9csf002-0-0-3 kafka]# kafka-topics --describe --zookeeper l2 --topic kf.p36.r4
Topic:kf.p36.r4 PartitionCount:36 ReplicationFactor:4 Configs:
Topic: kf.p36.r4 Partition: 0 Leader: 1 Replicas: 1,5,6,2 Isr: 5,2,6,1
Topic: kf.p36.r4 Partition: 1 Leader: 2 Replicas: 2,6,1,3 Isr: 1,3,6,2
Topic: kf.p36.r4 Partition: 2 Leader: 3 Replicas: 3,1,2,4 Isr: 3,4,2,1
Topic: kf.p36.r4 Partition: 3 Leader: 4 Replicas: 4,2,3,5 Isr: 3,2,4,5
Topic: kf.p36.r4 Partition: 4 Leader: 5 Replicas: 5,3,4,6 Isr: 3,6,4,5
Topic: kf.p36.r4 Partition: 5 Leader: 6 Replicas: 6,4,5,1 Isr: 4,5,6,1
Topic: kf.p36.r4 Partition: 6 Leader: 1 Replicas: 1,6,2,3 Isr: 3,6,2,1
Topic: kf.p36.r4 Partition: 7 Leader: 2 Replicas: 2,1,3,4 Isr: 3,4,2,1
Topic: kf.p36.r4 Partition: 8 Leader: 3 Replicas: 3,2,4,5 Isr: 3,2,4,5
Topic: kf.p36.r4 Partition: 9 Leader: 4 Replicas: 4,3,5,6 Isr: 3,6,4,5
Topic: kf.p36.r4 Partition: 10 Leader: 5 Replicas: 5,4,6,1 Isr: 4,5,6,1
Topic: kf.p36.r4 Partition: 11 Leader: 6 Replicas: 6,5,1,2 Isr: 5,2,6,1
Topic: kf.p36.r4 Partition: 12 Leader: 1 Replicas: 1,2,3,4 Isr: 3,4,2,1
Topic: kf.p36.r4 Partition: 13 Leader: 2 Replicas: 2,3,4,5 Isr: 3,2,4,5
Topic: kf.p36.r4 Partition: 14 Leader: 3 Replicas: 3,4,5,6 Isr: 3,6,4,5
Topic: kf.p36.r4 Partition: 15 Leader: 4 Replicas: 4,5,6,1 Isr: 4,5,6,1
Topic: kf.p36.r4 Partition: 16 Leader: 5 Replicas: 5,6,1,2 Isr: 5,2,6,1
Topic: kf.p36.r4 Partition: 17 Leader: 6 Replicas: 6,1,2,3 Isr: 3,2,6,1
Topic: kf.p36.r4 Partition: 18 Leader: 1 Replicas: 1,3,4,5 Isr: 3,4,5,1
Topic: kf.p36.r4 Partition: 19 Leader: 2 Replicas: 2,4,5,6 Isr: 6,2,4,5
Topic: kf.p36.r4 Partition: 20 Leader: 3 Replicas: 3,5,6,1 Isr: 3,5,6,1
Topic: kf.p36.r4 Partition: 21 Leader: 4 Replicas: 4,6,1,2 Isr: 4,2,6,1
Topic: kf.p36.r4 Partition: 22 Leader: 5 Replicas: 5,1,2,3 Isr: 3,5,2,1
Topic: kf.p36.r4 Partition: 23 Leader: 6 Replicas: 6,2,3,4 Isr: 3,6,2,4
Topic: kf.p36.r4 Partition: 24 Leader: 1 Replicas: 1,4,5,6 Isr: 4,5,6,1
Topic: kf.p36.r4 Partition: 25 Leader: 2 Replicas: 2,5,6,1 Isr: 1,6,2,5
Topic: kf.p36.r4 Partition: 26 Leader: 3 Replicas: 3,6,1,2 Isr: 3,2,6,1
Topic: kf.p36.r4 Partition: 27 Leader: 4 Replicas: 4,1,2,3 Isr: 3,4,2,1
Topic: kf.p36.r4 Partition: 28 Leader: 5 Replicas: 5,2,3,4 Isr: 3,2,4,5
Topic: kf.p36.r4 Partition: 29 Leader: 6 Replicas: 6,3,4,5 Isr: 3,6,4,5
Topic: kf.p36.r4 Partition: 30 Leader: 1 Replicas: 1,5,6,2 Isr: 5,2,6,1
Topic: kf.p36.r4 Partition: 31 Leader: 2 Replicas: 2,6,1,3 Isr: 1,3,6,2
Topic: kf.p36.r4 Partition: 32 Leader: 3 Replicas: 3,1,2,4 Isr: 3,4,2,1
Topic: kf.p36.r4 Partition: 33 Leader: 4 Replicas: 4,2,3,5 Isr: 3,2,4,5
Topic: kf.p36.r4 Partition: 34 Leader: 5 Replicas: 5,3,4,6 Isr: 3,6,4,5
Topic: kf.p36.r4 Partition: 35 Leader: 6 Replicas: 6,4,5,1 Isr: 4,5,6,1
Each message is 1,024 bytes. I ran two instances of kafka-producer-perf-test, each throttled to 120k records/s, for a total load of 240k messages/s:

kafka-producer-perf-test --topic kf.p36.r4 --num-records 600000000 --record-size 1024 --throughput 120000 --producer-props bootstrap.servers=b3:9092,b4:9092,b5:9092,b6:9092,b7:9092,b8:9092 acks=1

Everything was fine at first, but after a while error messages started to appear:
[root@g9csf002-0-0-1 ~]# kafka-producer-perf-test --topic kf.p36.r4 --num-records 600000000 --record-size 1024 --throughput 120000 --producer-props bootstrap.servers=b3:9092,b4:9092,b5:9092,b6:9092,b7:9092,b8:9092 acks=1
599506 records sent, 119901.2 records/sec (117.09 MB/sec), 4.8 ms avg latency, 147.0 max latency.
600264 records sent, 120052.8 records/sec (117.24 MB/sec), 2.0 ms avg latency, 13.0 max latency.
599584 records sent, 119916.8 records/sec (117.11 MB/sec), 1.9 ms avg latency, 13.0 max latency.
600760 records sent, 120152.0 records/sec (117.34 MB/sec), 1.9 ms avg latency, 13.0 max latency.
599764 records sent, 119904.8 records/sec (117.09 MB/sec), 2.0 ms avg latency, 35.0 max latency.
276603 records sent, 21408.9 records/sec (20.91 MB/sec), 103.0 ms avg latency, 10743.0 max latency.
org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.
org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.
org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.
org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.
org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.
Looking through the Kafka broker logs, I found that the communication between the broker and ZooKeeper was breaking down:

[2018-08-06 01:28:02,562] WARN Client session timed out, have not heard from server in 7768ms for sessionid 0x164f8ea86020062 (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:28:02,562] INFO Client session timed out, have not heard from server in 7768ms for sessionid 0x164f8ea86020062, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)

The ZooKeeper client is zookeeper-3.4.10.jar. I downloaded its source and added some logging to src/java/main/org/apache/zookeeper/ClientCnxn.java around the places where the SendThread reads the state variable, because the SendThread appeared to be getting blocked. In the log below you can see that the thread was blocked between 2018-08-06 01:27:56 and 2018-08-06 01:28:02, doing nothing (the modified code itself is shown after the log):
[2018-08-06 01:27:54,793] INFO ROVER: start of loop. (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:27:54,793] INFO ROVER: state = CONNECTED (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:27:54,793] INFO ROVER: to = 4000 (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:27:54,793] INFO ROVER: timeToNextPing = 2000 (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:27:54,793] INFO ROVER: before clientCnxnSocket.doTransport, to = 2000 (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:27:56,795] INFO ROVER: after clientCnxnSocket.doTransport (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:28:02,561] INFO ROVER: state = CONNECTED (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:28:02,561] INFO ROVER: start of loop. (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:28:02,561] INFO ROVER: state = CONNECTED (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:28:02,561] INFO ROVER: to = 1998 (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:28:02,561] INFO ROVER: timeToNextPing = -1002 (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:28:02,561] INFO ROVER: sendPing has done. (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:28:02,561] INFO ROVER: before clientCnxnSocket.doTransport, to = 1998 (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:28:02,561] INFO ROVER: after clientCnxnSocket.doTransport (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:28:02,561] INFO ROVER: state = CONNECTED (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:28:02,561] INFO ROVER: start of loop. (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:28:02,561] INFO ROVER: state = CONNECTED (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:28:02,561] INFO ROVER: to = -3768 (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:28:02,562] WARN Client session timed out, have not heard from server in 7768ms for sessionid 0x164f8ea86020062 (org.apache.zookeeper.ClientCnxn)
[2018-08-06 01:28:02,562] INFO Client session timed out, have not heard from server in 7768ms for sessionid 0x164f8ea86020062, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
The installed Kafka is confluent-kafka-2.11. The modified section of ClientCnxn.java, with the ROVER logging added, is:
// Excerpt from SendThread.run() in ClientCnxn.java, with ROVER logging added.
// If we are in read-only mode, seek for read/write server
if (state == States.CONNECTEDREADONLY) {
    long now = System.currentTimeMillis();
    int idlePingRwServer = (int) (now - lastPingRwServer);
    if (idlePingRwServer >= pingRwTimeout) {
        lastPingRwServer = now;
        idlePingRwServer = 0;
        pingRwTimeout = Math.min(2 * pingRwTimeout, maxPingRwTimeout);
        pingRwServer();
    }
    to = Math.min(to, pingRwTimeout - idlePingRwServer);
}
// 'to' is the time remaining before the session read timeout expires.
LOG.info("ROVER: before clientCnxnSocket.doTransport, to = " + to);
clientCnxnSocket.doTransport(to, pendingQueue, outgoingQueue, ClientCnxn.this);
LOG.info("ROVER: after clientCnxnSocket.doTransport");
LOG.info("ROVER: state = " + state);
} catch (Throwable e) {
At this point I do not know how to fix the problem. Can anyone shed some light on this?
Answer 0 (score: 0)
I have run into this before. Sometimes the Kafka JVM goes into a long garbage-collection pause, and sometimes something odd happens on the internal network. I noticed that in our case the timeouts were all around 6 or 7 seconds, similar to yours. The key point is that if a broker cannot check in with ZooKeeper within the allotted time, it starts reporting under-replicated partitions, which repeatedly destabilizes the whole cluster. If I remember correctly, we raised the timeout to 15 seconds and everything then ran with zero errors.
The relevant Kafka broker settings are:
zookeeper.session.timeout.ms (default: 6000 ms)
zookeeper.connection.timeout.ms
IIRC we changed both, but you should start by raising the session timeout.
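For example (a sketch only; 15000 ms is just the value that worked for us, so tune it to your environment), in each broker's server.properties:

zookeeper.session.timeout.ms=15000
zookeeper.connection.timeout.ms=15000

Then restart the brokers one at a time so the cluster stays available throughout.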