What is the difference between broker-list and bootstrap-server?

Date: 2018-05-13 03:44:12

Tags: apache-kafka

In Kafka, what is the difference between broker-list and bootstrap-servers?

3 answers:

Answer 0 (score: 0)

This answer is just for reference: I was confused because I was not using --broker-list, and then I realized it has been deprecated.

I am currently using Kafka version 2.6.0.

Now, for both the producer and the consumer, we have to use --bootstrap-server instead of --broker-list, because the latter is deprecated.
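For example, on a recent Kafka (the answer uses 2.6.0), both console tools take --bootstrap-server; the broker address localhost:9092 and the topic name my-topic below are placeholders for your own cluster, and the commands require a running broker:

```shell
# Modern Kafka: --bootstrap-server works for both tools; --broker-list is deprecated.
bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic my-topic

# Consume the same topic from the beginning:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic --from-beginning
```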

You can check this in the Kafka console scripts.

bin/kafka-console-producer.sh


As you can see in its help output, --broker-list is deprecated for kafka-console-producer.sh.

bin/kafka-console-consumer.sh


Answer 1 (score: 0)

Others have already answered well; I just want to share some extra information here.

The command-line tools under the bin directory are not documented in much detail.

Of course, you can pass --help to print a description of the syntax and options a given command supports.

For example: bin/kafka-console-producer.sh --help

--bootstrap-server <String: server to    REQUIRED unless --broker-list
  connect to>                              (deprecated) is specified. The server
                                           (s) to connect to. The broker list
                                           string in the form HOST1:PORT1,HOST2:
                                           PORT2.
--broker-list <String: broker-list>      DEPRECATED, use --bootstrap-server
                                           instead; ignored if --bootstrap-
                                           server is specified.  The broker
                                           list string in the form HOST1:PORT1,
                                           HOST2:PORT2.

But instead of running the command, you can always get the latest information directly from the source code: the corresponding Scala classes live under core/src/main/scala/kafka, in the tools directory.

For example, the kafka-console-producer.sh script actually calls into the ConsoleProducer.scala class, where you can easily find that broker-list is deprecated.
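A quick way to confirm this from a Kafka source checkout (using the paths given above; this assumes you have the source tree locally) is to search that class for the deprecation notice:

```shell
# Run from the root of a Kafka source checkout.
# Case-insensitive search of the console producer's option definitions.
grep -in "deprecated" core/src/main/scala/kafka/tools/ConsoleProducer.scala
```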

Happy source-code reading :)

Answer 2 (score: -1)

I also hate reading "walls of text" like the Kafka documentation :P
As far as I understand:

  • broker-list

    • the full list of servers; if any is missing, the producer may not work properly
    • related to producer commands
  • bootstrap-servers

    • one server is enough to discover all the others
    • related to consumer commands
    • Zookeeper is involved
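Both flags take the same comma-separated HOST1:PORT1,HOST2:PORT2 string; a minimal shell sketch (hostnames are illustrative) of splitting such a list into the host/port pairs a client would contact:

```shell
# Illustrative only: split a HOST1:PORT1,HOST2:PORT2 string into host/port pairs.
bootstrap="broker1:9092,broker2:9093"
for s in $(printf '%s\n' "$bootstrap" | tr ',' ' '); do
  # Strip everything after/before the colon with POSIX parameter expansion.
  echo "host=${s%%:*} port=${s##*:}"
done
```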

Sorry for being so... brief. Next time I will pay more attention to detail so it is clearer. To illustrate my point, I will use the Kafka 1.0.1 console scripts.

kafka-console-consumer.sh

The console consumer is a tool that reads data from Kafka and outputs it to standard output.
Option                                   Description
------                                   -----------
--blacklist <String: blacklist>          Blacklist of topics to exclude from
                                           consumption.
--bootstrap-server <String: server to    REQUIRED (unless old consumer is
  connect to>                              used): The server to connect to.
--consumer-property <String:             A mechanism to pass user-defined
  consumer_prop>                           properties in the form key=value to
                                           the consumer.
--consumer.config <String: config file>  Consumer config properties file. Note
                                           that [consumer-property] takes
                                           precedence over this config.
--csv-reporter-enabled                   If set, the CSV metrics reporter will
                                           be enabled
--delete-consumer-offsets                If specified, the consumer path in
                                           zookeeper is deleted when starting up
--enable-systest-events                  Log lifecycle events of the consumer
                                           in addition to logging consumed
                                           messages. (This is specific for
                                           system tests.)
--formatter <String: class>              The name of a class to use for
                                           formatting kafka messages for
                                           display. (default: kafka.tools.
                                           DefaultMessageFormatter)
--from-beginning                         If the consumer does not already have
                                           an established offset to consume
                                           from, start with the earliest
                                           message present in the log rather
                                           than the latest message.
--group <String: consumer group id>      The consumer group id of the consumer.
--isolation-level <String>               Set to read_committed in order to
                                           filter out transactional messages
                                           which are not committed. Set to
                                           read_uncommitted to read all
                                           messages. (default: read_uncommitted)
--key-deserializer <String:
  deserializer for key>
--max-messages <Integer: num_messages>   The maximum number of messages to
                                           consume before exiting. If not set,
                                           consumption is continual.
--metrics-dir <String: metrics           If csv-reporter-enable is set, and
  directory>                               this parameter is set, the csv
                                           metrics will be output here
--new-consumer                           Use the new consumer implementation.
                                           This is the default, so this option
                                           is deprecated and will be removed in
                                           a future release.
--offset <String: consume offset>        The offset id to consume from (a non-
                                           negative number), or 'earliest'
                                           which means from beginning, or
                                           'latest' which means from end
                                           (default: latest)
--partition <Integer: partition>         The partition to consume from.
                                           Consumption starts from the end of
                                           the partition unless '--offset' is
                                           specified.
--property <String: prop>                The properties to initialize the
                                           message formatter.
--skip-message-on-error                  If there is an error when processing a
                                           message, skip it instead of halt.
--timeout-ms <Integer: timeout_ms>       If specified, exit if no message is
                                           available for consumption for the
                                           specified interval.
--topic <String: topic>                  The topic id to consume on.
--value-deserializer <String:
  deserializer for values>
--whitelist <String: whitelist>          Whitelist of topics to include for
                                           consumption.
--zookeeper <String: urls>               REQUIRED (only when using old
                                           consumer): The connection string for
                                           the zookeeper connection in the form
                                           host:port. Multiple URLS can be
                                           given to allow fail-over.

kafka-console-producer.sh
Read data from standard input and publish it to Kafka.
Option                                   Description
------                                   -----------
--batch-size <Integer: size>             Number of messages to send in a single
                                           batch if they are not being sent
                                           synchronously. (default: 200)
--broker-list <String: broker-list>      REQUIRED: The broker list string in
                                           the form HOST1:PORT1,HOST2:PORT2.
--compression-codec [String:             The compression codec: either 'none',
  compression-codec]                       'gzip', 'snappy', or 'lz4'.If
                                           specified without value, then it
                                           defaults to 'gzip'
--key-serializer <String:                The class name of the message encoder
  encoder_class>                           implementation to use for
                                           serializing keys. (default: kafka.
                                           serializer.DefaultEncoder)
--line-reader <String: reader_class>     The class name of the class to use for
                                           reading lines from standard in. By
                                           default each line is read as a
                                           separate message. (default: kafka.
                                           tools.
                                           ConsoleProducer$LineMessageReader)
--max-block-ms <Long: max block on       The max time that the producer will
  send>                                    block for during a send request
                                           (default: 600

As you can see, the bootstrap-server parameter occurs only for the consumer. On the other hand, broker-list appears only in the producer's parameter list.
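In that version, the two commands would therefore be invoked like this (host and topic names are placeholders, and a running 1.0.x broker is assumed):

```shell
# Kafka 1.0.x: the producer takes --broker-list, the consumer takes --bootstrap-server.
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic bets

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic bets --from-beginning
```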

In addition:

kafka-console-consumer.sh --zookeeper localost:2181 --topic bets
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].

As cricket-007 noticed, bootstrap-server and zookeeper look like they have a similar purpose. The difference is that --zookeeper should point to Zookeeper nodes, whereas --bootstrap-server points to a Kafka node and port.

To reiterate: bootstrap-server is used as a consumer parameter, and broker-list is used as a producer parameter.
