Kafka starts, but will not restart after shutdown

Date: 2018-10-07 01:52:15

Tags: apache-kafka

On Ubuntu Desktop 16.04, I installed the Confluent Open Source platform using apt.
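
For reference, the install followed the standard Confluent apt steps, roughly as below. I am reconstructing this from memory, so the repository version and package name are approximate rather than copied from my shell history:

wget -qO - http://packages.confluent.io/deb/3.0/archive.key | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] http://packages.confluent.io/deb/3.0 stable main"
sudo apt-get update && sudo apt-get install confluent-platform-oss-2.11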

First I run ZooKeeper:
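
(This is the ZooKeeper bundled with the Confluent packages; assuming the stock config path, the invocation is:)

idf@DESKTOP-QVGBOPK:~$ sudo zookeeper-server-start /etc/kafka/zookeeper.properties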

Then I run Kafka:

idf@DESKTOP-QVGBOPK:~$ sudo kafka-server-start /etc/kafka/server.properties
[2018-10-07 01:48:12,378] INFO KafkaConfig values:
        advertised.host.name = null
        metric.reporters = []
        quota.producer.default = 9223372036854775807
        offsets.topic.num.partitions = 50
        log.flush.interval.messages = 9223372036854775807
        auto.create.topics.enable = true
        controller.socket.timeout.ms = 30000
        log.flush.interval.ms = null
        principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
        replica.socket.receive.buffer.bytes = 65536
        min.insync.replicas = 1
        replica.fetch.wait.max.ms = 500
        num.recovery.threads.per.data.dir = 1
        ssl.keystore.type = JKS
        sasl.mechanism.inter.broker.protocol = GSSAPI
        default.replication.factor = 1
        ssl.truststore.password = null
        log.preallocate = false
        sasl.kerberos.principal.to.local.rules = [DEFAULT]
        fetch.purgatory.purge.interval.requests = 1000
        ssl.endpoint.identification.algorithm = null
        replica.socket.timeout.ms = 30000
        message.max.bytes = 1000012
        num.io.threads = 8
        offsets.commit.required.acks = -1
        log.flush.offset.checkpoint.interval.ms = 60000
        delete.topic.enable = false
        quota.window.size.seconds = 1
        ssl.truststore.type = JKS
        offsets.commit.timeout.ms = 5000
        quota.window.num = 11
        zookeeper.connect = localhost:2181
        authorizer.class.name =
        num.replica.fetchers = 1
        log.retention.ms = null
        log.roll.jitter.hours = 0
        log.cleaner.enable = true
        offsets.load.buffer.size = 5242880
        log.cleaner.delete.retention.ms = 86400000
        ssl.client.auth = none
        controlled.shutdown.max.retries = 3
        queued.max.requests = 500
        offsets.topic.replication.factor = 3
        log.cleaner.threads = 1
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        socket.request.max.bytes = 104857600
        ssl.trustmanager.algorithm = PKIX
        zookeeper.session.timeout.ms = 6000
        log.retention.bytes = -1
        log.message.timestamp.type = CreateTime
        sasl.kerberos.min.time.before.relogin = 60000
        zookeeper.set.acl = false
        connections.max.idle.ms = 600000
        offsets.retention.minutes = 1440
        replica.fetch.backoff.ms = 1000
        inter.broker.protocol.version = 0.10.0-IV1
        log.retention.hours = 168
        num.partitions = 1
        broker.id.generation.enable = true
        listeners = null
        ssl.provider = null
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        log.roll.ms = null
        log.flush.scheduler.interval.ms = 9223372036854775807
        ssl.cipher.suites = null
        log.index.size.max.bytes = 10485760
        ssl.keymanager.algorithm = SunX509
        security.inter.broker.protocol = PLAINTEXT
        replica.fetch.max.bytes = 1048576
        advertised.port = null
        log.cleaner.dedupe.buffer.size = 134217728
        replica.high.watermark.checkpoint.interval.ms = 5000
        log.cleaner.io.buffer.size = 524288
        sasl.kerberos.ticket.renew.window.factor = 0.8
        zookeeper.connection.timeout.ms = 6000
        controlled.shutdown.retry.backoff.ms = 5000
        log.roll.hours = 168
        log.cleanup.policy = delete
        host.name =
        log.roll.jitter.ms = null
        max.connections.per.ip = 2147483647
        offsets.topic.segment.bytes = 104857600
        background.threads = 10
        quota.consumer.default = 9223372036854775807
        request.timeout.ms = 30000
        log.message.format.version = 0.10.0-IV1
        log.index.interval.bytes = 4096
        log.dir = /tmp/kafka-logs
        log.segment.bytes = 1073741824
        log.cleaner.backoff.ms = 15000
        offset.metadata.max.bytes = 4096
        ssl.truststore.location = null
        group.max.session.timeout.ms = 300000
        ssl.keystore.password = null
        zookeeper.sync.time.ms = 2000
        port = 9092
        log.retention.minutes = null
        log.segment.delete.delay.ms = 60000
        log.dirs = /var/lib/kafka
        controlled.shutdown.enable = true
        compression.type = producer
        max.connections.per.ip.overrides =
        log.message.timestamp.difference.max.ms = 9223372036854775807
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
        auto.leader.rebalance.enable = true
        leader.imbalance.check.interval.seconds = 300
        log.cleaner.min.cleanable.ratio = 0.5
        replica.lag.time.max.ms = 10000
        num.network.threads = 3
        ssl.key.password = null
        reserved.broker.max.id = 1000
        metrics.num.samples = 2
        socket.send.buffer.bytes = 102400
        ssl.protocol = TLS
        socket.receive.buffer.bytes = 102400
        ssl.keystore.location = null
        replica.fetch.min.bytes = 1
        broker.rack = null
        unclean.leader.election.enable = true
        sasl.enabled.mechanisms = [GSSAPI]
        group.min.session.timeout.ms = 6000
        log.cleaner.io.buffer.load.factor = 0.9
        offsets.retention.check.interval.ms = 600000
        producer.purgatory.purge.interval.requests = 1000
        metrics.sample.window.ms = 30000
        broker.id = 0
        offsets.topic.compression.codec = 0
        log.retention.check.interval.ms = 300000
        advertised.listeners = null
        leader.imbalance.per.broker.percentage = 10
 (kafka.server.KafkaConfig)
[2018-10-07 01:48:12,540] WARN Please note that the support metrics collection feature ("Metrics") of Proactive Support is enabled.  With Metrics enabled, this broker is configured to collect and report certain broker and cluster metadata ("Metadata") about your use of the Confluent Platform 2.0 (including without limitation, your remote internet protocol address) to Confluent, Inc. ("Confluent") or its parent, subsidiaries, affiliates or service providers every 24hours.  This Metadata may be transferred to any country in which Confluent maintains facilities.  For a more in depth discussion of how Confluent processes such information, please read our Privacy Policy located at http://www.confluent.io/privacy. By proceeding with `confluent.support.metrics.enable=true`, you agree to all such collection, transfer, storage and use of Metadata by Confluent.  You can turn the Metrics feature off by setting `confluent.support.metrics.enable=false` in the broker configuration and restarting the broker.  See the Confluent Platform documentation for further information. (io.confluent.support.metrics.SupportedServerStartable)
[2018-10-07 01:48:12,543] INFO starting (kafka.server.KafkaServer)
[2018-10-07 01:48:12,554] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)
[2018-10-07 01:48:12,569] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2018-10-07 01:48:12,577] INFO Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,578] INFO Client environment:host.name=DESKTOP-QVGBOPK.localdomain (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,580] INFO Client environment:java.version=1.8.0_181 (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,581] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,584] INFO Client environment:java.home=/usr/lib/jvm/java-8-oracle/jre (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,585] INFO Client environment:java.class.path=:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.4.0-b34.jar:/usr/bin/../share/java/kafka/argparse4j-0.5.0.jar:/usr/bin/../share/java/kafka/avro-1.7.7.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.8.3.jar:/usr/bin/../share/java/kafka/commons-codec-1.9.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.1.jar:/usr/bin/../share/java/kafka/commons-compress-1.4.1.jar:/usr/bin/../share/java/kafka/commons-digester-1.8.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.1.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/commons-validator-1.4.1.jar:/usr/bin/../share/java/kafka/connect-api-0.10.0.1-cp1.jar:/usr/bin/../share/java/kafka/connect-file-0.10.0.1-cp1.jar:/usr/bin/../share/java/kafka/connect-json-0.10.0.1-cp1.jar:/usr/bin/../share/java/kafka/connect-runtime-0.10.0.1-cp1.jar:/usr/bin/../share/java/kafka/guava-18.0.jar:/usr/bin/../share/java/kafka/hk2-api-2.4.0-b34.jar:/usr/bin/../share/java/kafka/hk2-locator-2.4.0-b34.jar:/usr/bin/../share/java/kafka/hk2-utils-2.4.0-b34.jar:/usr/bin/../share/java/kafka/httpclient-4.5.1.jar:/usr/bin/../share/java/kafka/httpcore-4.4.3.jar:/usr/bin/../share/java/kafka/httpmime-4.5.1.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.6.0.jar:/usr/bin/../share/java/kafka/jackson-core-2.6.3.jar:/usr/bin/../share/java/kafka/jackson-core-asl-1.9.13.jar:/usr/bin/../share/java/kafka/jackson-databind-2.6.3.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.6.3.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.6.3.jar:/usr/bin/../share/java/kafka/jackson-mapper-asl-1.9.13.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.6.3.jar:/usr/bin/../share/java/kafka/javassist-3.18.2-GA.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.2.jar:/usr/bin/../share/java/kafka/javax.inject-1.jar:/usr/bin/../share/java/kafka/javax.inject-2.4.0-b34.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.0.1.jar:/usr/bin/../share/java/kafka/jersey-client-2.22.2.jar:/usr/bin/../share/java/kafka/jersey-common-2.22.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.22.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.22.2.jar:/usr/bin/../share/java/kafka/jersey-guava-2.22.2.jar:/usr/bin/../share/java/kafka/jersey-media-jaxb-2.22.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.22.2.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.2.15.v20160210.jar:/usr/bin/../share/java/kafka/jetty-http-9.2.15.v20160210.jar:/usr/bin/../share/java/kafka/jetty-io-9.2.15.v20160210.jar:/usr/bin/../share/java/kafka/jetty-security-9.2.15.v20160210.jar:/usr/bin/../share/java/kafka/jetty-server-9.2.15.v20160210.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.2.15.v20160210.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.2.15.v20160210.jar:/usr/bin/../share/java/kafka/jetty-util-9.2.15.v20160210.jar:/usr/bin/../share/java/kafka/jopt-simple-4.9.jar:/usr/bin/../share/java/kafka/kafka-clients-0.10.0.1-cp1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-0.10.0.1-cp1.jar:/usr/bin/../share/java/kafka/kafka-streams-0.10.0.1-cp1.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-0.10.0.1-cp1.jar:/usr/bin/../share/java/kafka/kafka-tools-0.10.0.1-cp1.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka_2.11-0.10.0.1-cp1-javadoc.jar:/usr/bin/../share/java/kafka/kafka_2.11-0.10.0.1-cp1-scaladoc.jar:/usr/bin/../share/java/kafka/kafka_2.11-0.10.0.1-cp1-sources.jar:/usr/bin/../share/java/kafka/kafka_2.11-0.10.0.1-cp1-test-sources.jar:/usr/bin/../share/java/kafka/kafka_2.11-0.10.0.1-cp1-test.jar:/usr/bin/../share/java/kafka/kafka_2.11-0.10.0.1-cp1.jar:/usr/bin/../share/java/kafka/log4j-1.2.17.jar:/usr/bin/../share/java/kafka/lz4-1.3.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.1.jar:/usr/bin/../share/java/kafka/paranamer-2.3.jar:/usr/bin/../share/java/kafka/reflections-0.9.10.jar:/usr/bin/../share/java/kafka/rocksdbjni-4.8.0.jar:/usr/bin/../share/java/kafka/scala-library-2.11.8.jar:/usr/bin/../share/java/kafka/scala-parser-combinators_2.11-1.0.4.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.21.jar:/usr/bin/../share/java/kafka/slf4j-log4j12-1.7.21.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.2.6.jar:/usr/bin/../share/java/kafka/support-metrics-client-3.0.1.jar:/usr/bin/../share/java/kafka/support-metrics-common-3.0.1.jar:/usr/bin/../share/java/kafka/validation-api-1.1.0.Final.jar:/usr/bin/../share/java/kafka/xz-1.0.jar:/usr/bin/../share/java/kafka/zkclient-0.8.jar:/usr/bin/../share/java/kafka/zookeeper-3.4.6.jar:/usr/bin/../share/java/confluent-support-metrics/support-metrics-fullcollector-3.0.1.jar:/usr/share/java/confluent-support-metrics/support-metrics-fullcollector-3.0.1.jar (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,587] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,588] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,592] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,593] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,595] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,596] INFO Client environment:os.version=4.4.0-17134-Microsoft (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,597] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,597] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,598] INFO Client environment:user.dir=/home/idf (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,600] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@35cabb2a (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:48:12,618] INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient)
[2018-10-07 01:48:12,621] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2018-10-07 01:48:12,629] INFO Socket connection established to localhost/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2018-10-07 01:48:12,643] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1664b569420002b, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2018-10-07 01:48:12,647] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
[2018-10-07 01:48:12,831] INFO Loading logs. (kafka.log.LogManager)
[2018-10-07 01:48:12,838] INFO Logs loading complete. (kafka.log.LogManager)
[2018-10-07 01:48:13,015] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2018-10-07 01:48:13,018] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2018-10-07 01:48:13,025] WARN No meta.properties file under dir /var/lib/kafka/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2018-10-07 01:48:13,068] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2018-10-07 01:48:13,073] INFO [Socket Server on Broker 0], Started 1 acceptor threads (kafka.network.SocketServer)
[2018-10-07 01:48:13,094] INFO [ExpirationReaper-0], Starting  (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2018-10-07 01:48:13,096] INFO [ExpirationReaper-0], Starting  (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2018-10-07 01:48:13,136] INFO Creating /controller (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
[2018-10-07 01:48:13,150] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
[2018-10-07 01:48:13,152] INFO 0 successfully elected as leader (kafka.server.ZookeeperLeaderElector)
[2018-10-07 01:48:13,318] INFO New leader is 0 (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
[2018-10-07 01:48:13,319] INFO [ExpirationReaper-0], Starting  (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2018-10-07 01:48:13,321] INFO [ExpirationReaper-0], Starting  (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2018-10-07 01:48:13,332] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.GroupCoordinator)
[2018-10-07 01:48:13,334] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.GroupCoordinator)
[2018-10-07 01:48:13,339] INFO [Group Metadata Manager on Broker 0]: Removed 0 expired offsets in 9 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2018-10-07 01:48:13,362] INFO [ThrottledRequestReaper-Produce], Starting  (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2018-10-07 01:48:13,364] INFO [ThrottledRequestReaper-Fetch], Starting  (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2018-10-07 01:48:13,371] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$)
[2018-10-07 01:48:13,396] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
[2018-10-07 01:48:13,410] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
[2018-10-07 01:48:13,412] INFO Registered broker 0 at path /brokers/ids/0 with addresses: PLAINTEXT -> EndPoint(DESKTOP-QVGBOPK.localdomain,9092,PLAINTEXT) (kafka.utils.ZkUtils)
[2018-10-07 01:48:13,415] WARN No meta.properties file under dir /var/lib/kafka/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2018-10-07 01:48:13,436] INFO Kafka version : 0.10.0.1-cp1 (org.apache.kafka.common.utils.AppInfoParser)
[2018-10-07 01:48:13,437] INFO Kafka commitId : e7288edd541cee03 (org.apache.kafka.common.utils.AppInfoParser)
[2018-10-07 01:48:13,441] INFO [Kafka Server 0], started (kafka.server.KafkaServer)
[2018-10-07 01:48:13,446] INFO Waiting 10064 ms for the monitored broker to finish starting up... (io.confluent.support.metrics.MetricsReporter)
[2018-10-07 01:48:13,588] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions [_schemas,0],[test,0],[TutorialTopic,0],[__confluent.support.metrics,0] (kafka.server.ReplicaFetcherManager)
[2018-10-07 01:48:13,615] INFO Completed load of log _schemas-0 with log end offset 0 (kafka.log.Log)
[2018-10-07 01:48:13,618] INFO Created log for partition [_schemas,0] in /var/lib/kafka with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-10-07 01:48:13,620] INFO Partition [_schemas,0] on broker 0: No checkpointed highwatermark is found for partition [_schemas,0] (kafka.cluster.Partition)
[2018-10-07 01:48:13,645] INFO Completed load of log test-0 with log end offset 0 (kafka.log.Log)
[2018-10-07 01:48:13,648] INFO Created log for partition [test,0] in /var/lib/kafka with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> delete, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-10-07 01:48:13,649] INFO Partition [test,0] on broker 0: No checkpointed highwatermark is found for partition [test,0] (kafka.cluster.Partition)
[2018-10-07 01:48:13,655] INFO Completed load of log TutorialTopic-0 with log end offset 0 (kafka.log.Log)
[2018-10-07 01:48:13,657] INFO Created log for partition [TutorialTopic,0] in /var/lib/kafka with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> delete, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-10-07 01:48:13,659] INFO Partition [TutorialTopic,0] on broker 0: No checkpointed highwatermark is found for partition [TutorialTopic,0] (kafka.cluster.Partition)
[2018-10-07 01:48:13,665] INFO Completed load of log __confluent.support.metrics-0 with log end offset 0 (kafka.log.Log)
[2018-10-07 01:48:13,667] INFO Created log for partition [__confluent.support.metrics,0] in /var/lib/kafka with properties {compression.type -> producer, message.format.version -> 0.10.0-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> delete, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 31536000000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-10-07 01:48:13,670] INFO Partition [__confluent.support.metrics,0] on broker 0: No checkpointed highwatermark is found for partition [__confluent.support.metrics,0] (kafka.cluster.Partition)
[2018-10-07 01:48:13,686] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions [_schemas,0],[test,0],[TutorialTopic,0],[__confluent.support.metrics,0] (kafka.server.ReplicaFetcherManager)
[2018-10-07 01:48:23,519] INFO Monitored broker is now ready (io.confluent.support.metrics.MetricsReporter)
[2018-10-07 01:48:23,525] INFO Starting metrics collection from monitored broker... (io.confluent.support.metrics.MetricsReporter)

If I now Ctrl-C Kafka and then try to start it again, I get:

idf@DESKTOP-QVGBOPK:~$ sudo kafka-server-start /etc/kafka/server.properties

  ....
 (kafka.server.KafkaConfig)
[2018-10-07 01:52:44,565] INFO Loading logs. (kafka.log.LogManager)
[2018-10-07 01:52:44,596] WARN Found a corrupted index file, /var/lib/kafka/TutorialTopic-0/00000000000000000000.index, deleting and rebuilding index... (kafka.log.Log)
[2018-10-07 01:52:44,611] ERROR There was an error in one of the threads during logs loading: java.io.IOException: Invalid argument (kafka.log.LogManager)
[2018-10-07 01:52:44,614] FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.io.IOException: Invalid argument
        at java.io.RandomAccessFile.setLength(Native Method)
        at kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:294)
        at kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:285)
        at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:231)
        at kafka.log.OffsetIndex.resize(OffsetIndex.scala:285)
        at kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply$mcV$sp(OffsetIndex.scala:274)
        at kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:274)
        at kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:274)
        at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:231)
        at kafka.log.OffsetIndex.trimToValidSize(OffsetIndex.scala:273)
        at kafka.log.LogSegment.recover(LogSegment.scala:202)
        at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:199)
        at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:171)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
        at kafka.log.Log.loadSegments(Log.scala:171)
        at kafka.log.Log.<init>(Log.scala:101)
        at kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$3$$anonfun$apply$10$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:152)
        at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:56)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
[2018-10-07 01:52:44,618] WARN Found a corrupted index file, /var/lib/kafka/__confluent.support.metrics-0/00000000000000000000.index, deleting and rebuilding index... (kafka.log.Log)
[2018-10-07 01:52:44,647] INFO shutting down (kafka.server.KafkaServer)
[2018-10-07 01:52:44,654] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2018-10-07 01:52:44,663] INFO Session: 0x1664b569420002c closed (org.apache.zookeeper.ZooKeeper)
[2018-10-07 01:52:44,663] WARN Found a corrupted index file, /var/lib/kafka/_schemas-0/00000000000000000000.index, deleting and rebuilding index... (kafka.log.Log)
[2018-10-07 01:52:44,663] INFO EventThread shut down (org.apache.zookeeper.ClientCnxn)
[2018-10-07 01:52:44,666] INFO shut down completed (kafka.server.KafkaServer)
[2018-10-07 01:52:44,671] INFO shutting down (kafka.server.KafkaServer)
idf@DESKTOP-QVGBOPK:~$

If I delete the files in /var/lib/kafka/ (including hidden files) and also delete the files from /tmp, then start Kafka again, it seems fine. But regardless, the problem above keeps recurring.
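
Concretely, the cleanup that gets it starting again looks like this (/var/lib/kafka is the log.dirs value from the server.properties dump above; the /tmp paths are the defaults from the stock ZooKeeper/Kafka configs, so adjust if yours differ):

sudo rm -rf /var/lib/kafka/* /var/lib/kafka/.[!.]*   # broker data dir, including hidden files
sudo rm -rf /tmp/kafka-logs /tmp/zookeeper           # default data dirs for the stock configs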

Here is the output of kafka-topics:

idf@DESKTOP-QVGBOPK:/etc/kafka$ kafka-topics --zookeeper 127.0.0.1:2181 --describe
Topic:TutorialTopic     PartitionCount:1        ReplicationFactor:1     Configs:
        Topic: TutorialTopic    Partition: 0    Leader: 0       Replicas: 0     Isr: 0
Topic:__confluent.support.metrics       PartitionCount:1        ReplicationFactor:1     Configs:retention.ms=31536000000
        Topic: __confluent.support.metrics      Partition: 0    Leader: 0       Replicas: 0     Isr: 0
Topic:_schemas  PartitionCount:1        ReplicationFactor:1     Configs:cleanup.policy=compact
        Topic: _schemas Partition: 0    Leader: 0       Replicas: 0     Isr: 0
Topic:test      PartitionCount:1        ReplicationFactor:1     Configs:
        Topic: test     Partition: 0    Leader: 0       Replicas: 0     Isr: 0
idf@DESKTOP-QVGBOPK:/etc/kafka$

0 Answers:

No answers yet.