Error while appending records to topic

Time: 2018-11-01 07:10:08

Tags: apache-kafka ioexception kafka-topic

I am trying to ingest a 10-million-row CSV file (600 MB) through the Connect API. Connect starts consuming and gets through 3.7 million records, after which I get the following error.
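The question does not include the Connect configuration; a minimal standalone FileStream source setup matching this description might look like the sketch below (the connector name and file path are illustrative assumptions, not taken from the question):

# connect-file-source.properties (illustrative)
name=csv-file-source
connector.class=org.apache.kafka.connect.file.FileStreamSourceConnector
tasks.max=1
file=/data/input.csv
topic=topic-test

Such a connector would be launched with bin/connect-standalone.sh config/connect-standalone.properties connect-file-source.properties.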

[2018-11-01 07:28:49,889] ERROR Error while appending records to topic-test-0 in dir /tmp/kafka-logs (kafka.server.LogDirFailureChannel)
java.io.IOException: No space left on device
        at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
        at sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:60)
        at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
        at sun.nio.ch.IOUtil.write(IOUtil.java:65)
        at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:211)
        at org.apache.kafka.common.record.MemoryRecords.writeFullyTo(MemoryRecords.java:95)
        at org.apache.kafka.common.record.FileRecords.append(FileRecords.java:151)
        at kafka.log.LogSegment.append(LogSegment.scala:138)
        at kafka.log.Log.$anonfun$append$2(Log.scala:868)
        at kafka.log.Log.maybeHandleIOException(Log.scala:1837)
        at kafka.log.Log.append(Log.scala:752)
        at kafka.log.Log.appendAsLeader(Log.scala:722)
        at kafka.cluster.Partition.$anonfun$appendRecordsToLeader$1(Partition.scala:634)
        at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
        at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:257)
        at kafka.cluster.Partition.appendRecordsToLeader(Partition.scala:622)
        at kafka.server.ReplicaManager.$anonfun$appendToLocalLog$2(ReplicaManager.scala:745)
        at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:234)
        at scala.collection.mutable.HashMap.$anonfun$foreach$1(HashMap.scala:138)
        at scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:236)
        at scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:229)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
        at scala.collection.mutable.HashMap.foreach(HashMap.scala:138)
        at scala.collection.TraversableLike.map(TraversableLike.scala:234)
        at scala.collection.TraversableLike.map$(TraversableLike.scala:227)
        at scala.collection.AbstractTraversable.map(Traversable.scala:104)
        at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:733)
        at kafka.server.ReplicaManager.appendRecords(ReplicaManager.scala:472)
        at kafka.server.KafkaApis.handleProduceRequest(KafkaApis.scala:489)
        at kafka.server.KafkaApis.handle(KafkaApis.scala:106)
        at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
        at java.lang.Thread.run(Thread.java:748)
[2018-11-01 07:28:49,893] INFO [ReplicaManager broker=0] Stopping serving replicas in dir /tmp/kafka-logs (kafka.server.ReplicaManager)
[2018-11-01 07:28:49,897] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,topic-test-0,__consumer_offsets-25,__consumer_offsets 

I have only one topic, named topic-test.

Machine specs:

  • OS: CentOS 7
  • RAM: 16 GB
  • HDD: 80 GB

I have seen some blogs talking about log.dirs in server.properties, but it is not clear what value should go there. Do I also need to create partitions? I did not, since I assumed it would all go into the same data file.
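For reference, the setting those blogs mention is a single line in config/server.properties; the shipped default points at the temporary filesystem, which is exactly the directory the error above refers to:

log.dirs=/tmp/kafka-logs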

1 Answer:

Answer 0 (score: 0)

ERROR Error while appending records to topic-test-0 in dir /tmp/kafka-logs (kafka.server.LogDirFailureChannel) java.io.IOException: No space left on device

This happens when you push a huge file or stream into a Kafka topic and the log directory runs out of disk space. Go to the default log directory /tmp/kafka-logs and check the free space:

[root@ENT-CL-015243 kafka-logs]# df -h
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/vg_rhel6u4x64-lv_root   61G  8.4G   49G  15% /
tmpfs                              7.7G     0  7.7G   0% /dev/shm
/dev/sda1                          485M   37M  423M   9% /boot
/dev/mapper/vg_rhel6u4x64-lv_home  2.0G   68M  1.9G   4% /home
/dev/mapper/vg_rhel6u4x64-lv_tmp   4.0G  315M  3.5G   9% /tmp
/dev/mapper/vg_rhel6u4x64-lv_var   7.9G  252M  7.3G   4% /var

As you can see, in my case only 3.5 GB of /tmp space was available, which is why I was hitting this issue. I created a /klogs directory on the root partition and changed log.dirs in server.properties:

log.dirs=/klogs/kafka-logs
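Put together, the fix is roughly the following sequence (a sketch assuming a standard tarball install run from the Kafka directory; adjust paths to your layout):

# create the new log directory on a partition with enough free space
mkdir -p /klogs/kafka-logs

# edit config/server.properties and set:
#   log.dirs=/klogs/kafka-logs

# restart the broker so the new directory is picked up
bin/kafka-server-stop.sh
bin/kafka-server-start.sh -daemon config/server.properties

Note that data already written under /tmp/kafka-logs is not migrated automatically; after the switch the broker starts with an empty log directory, so on a single-node test setup it is usually simplest to recreate the topic and re-run the ingest.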