Make an Apache Storm topology consume the latest offsets from Kafka

Asked: 2018-05-26 11:05:11

Tags: apache-kafka apache-storm

I have a KafkaSpout, 2 bolts that process the data, and 2 bolts that store the processed data in MongoDB.

I am using Apache Flux to create the topology, and I read data from Kafka into the spout. Everything runs fine, but every time I run the topology it processes all the messages in Kafka from the beginning. Once it has processed all the messages, it does not wait for more messages; it crashes instead.

[Error message screenshot]

How can I make the Storm topology process only the latest messages?

Here is my topology .yaml file:

name: "kafka-topology"

components:
# MongoDB mapper
  - id: "block-mapper"
    className: "org.apache.storm.mongodb.common.mapper.SimpleMongoMapper"
    configMethods:
      - name: "withFields"
        args: # The following are the tuple fields to map to a MongoDB document
          - ["block"]
# MongoDB mapper
  - id: "transaction-mapper"
    className: "org.apache.storm.mongodb.common.mapper.SimpleMongoMapper"
    configMethods:
      - name: "withFields"
        args: # The following are the tuple fields to map to a MongoDB document
          - ["transaction"]

  - id: "stringScheme"
    className: "org.apache.storm.kafka.StringScheme"

  - id: "stringMultiScheme"
    className: "org.apache.storm.spout.SchemeAsMultiScheme"
    constructorArgs:
      - ref: "stringScheme"

  - id: "zkHosts"
    className: "org.apache.storm.kafka.ZkHosts"
    constructorArgs:
      - "172.25.33.191:2181"

  - id: "spoutConfig"
    className: "org.apache.storm.kafka.SpoutConfig"
    constructorArgs:
      # brokerHosts
      - ref: "zkHosts"
      # topic
      - "blockdata"
      # zkRoot
      - ""
      # id
      - "myId"
    properties:
      - name: "scheme"
        ref: "stringMultiScheme"
      - name: "ignoreZkOffsets"
        value: false



config:
  topology.workers: 1
  # ...

# spout definitions
spouts:
  - id: "kafka-spout"
    className: "org.apache.storm.kafka.KafkaSpout"
    constructorArgs:
      - ref: "spoutConfig"
    parallelism: 1

# bolt definitions
bolts:
  - id: "blockprocessing-bolt"
    className: "org.apache.storm.flux.wrappers.bolts.FluxShellBolt"
    constructorArgs:
      # command line
      - ["python", "process-bolt.py"]
      # output fields
      - ["block"]
    parallelism: 1
    # ...
  - id: "transprocessing-bolt"
    className: "org.apache.storm.flux.wrappers.bolts.FluxShellBolt"
    constructorArgs:
      # command line
      - ["python", "trans-bolt.py"]
      # output fields
      - ["transaction"]
    parallelism: 1
    # ...

  - id: "mongoBlock-bolt"
    className: "org.apache.storm.mongodb.bolt.MongoInsertBolt"
    constructorArgs:
      - "mongodb://172.25.33.205:27017/testdb"
      - "block"
      - ref: "block-mapper"
    parallelism: 1
    # ...

  - id: "mongoTrans-bolt"
    className: "org.apache.storm.mongodb.bolt.MongoInsertBolt"
    constructorArgs:
      - "mongodb://172.25.33.205:27017/testdb"
      - "transaction"
      - ref: "transaction-mapper"
    parallelism: 1
    # ...



  - id: "log"
    className: "org.apache.storm.flux.wrappers.bolts.LogInfoBolt"
    parallelism: 1
    # ...

#stream definitions
# stream definitions define connections between spouts and bolts.
# note that such connections can be cyclical
# custom stream groupings are also supported

streams:


  - name: "kafka --> block-Processing" # name isn't used (placeholder for logging, UI, etc.)
    from: "kafka-spout"
    to: "blockprocessing-bolt"
    grouping:
      type: SHUFFLE

  - name: "kafka --> transaction-processing" # name isn't used (placeholder for logging, UI, etc.)
    from: "kafka-spout"
    to: "transprocessing-bolt"
    grouping:
      type: SHUFFLE

  - name: "block --> mongo"
    from: "blockprocessing-bolt"
    to: "mongoBlock-bolt"
    grouping:
      type: SHUFFLE

  - name: "transaction --> mongo"
    from: "transprocessing-bolt"
    to: "mongoTrans-bolt"
    grouping:
      type: SHUFFLE

I tried adding properties to the spoutConfig so that it would fetch only the latest messages, like this:
 - id: "spoutConfig"
className: "org.apache.storm.kafka.SpoutConfig"
constructorArgs:
  - ref: "zkHosts"
  - "blockdata"
  - ""
  - "myId"
properties:
  - name: "scheme"
    ref: "stringMultiScheme"
  - name: "startOffsetTime"
    ref: "EarliestTime"

  - name: "forceFromStart"
    value: false

But no matter what I put in the ref for startOffsetTime, it throws an error:

Exception in thread "main" java.lang.IllegalArgumentException: Can not set long field org.apache.storm.kafka.KafkaConfig.startOffsetTime to null value

1 Answer:

Answer (score: 1):

You need to set startOffsetTime to kafka.api.OffsetRequest.LatestTime. As you can see at https://github.com/apache/storm/tree/64af629a19a82591dbf3428f7fd6b02f39e0723f/external/storm-kafka#kafkaconfig, the default setting will start from the earliest available offset.
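For reference, here is a minimal sketch of what the spoutConfig component could look like in Flux YAML. It assumes Flux will coerce the numeric value onto the long startOffsetTime field; kafka.api.OffsetRequest.LatestTime() evaluates to -1L and EarliestTime() to -2L, so the literal value is used instead of a ref:

  - id: "spoutConfig"
    className: "org.apache.storm.kafka.SpoutConfig"
    constructorArgs:
      - ref: "zkHosts"
      - "blockdata"
      - ""
      - "myId"
    properties:
      - name: "scheme"
        ref: "stringMultiScheme"
      # -1 is the value returned by kafka.api.OffsetRequest.LatestTime()
      - name: "startOffsetTime"
        value: -1

With startOffsetTime alone the spout still resumes from offsets committed in Zookeeper on later runs; setting ignoreZkOffsets to true would force it to jump to the latest offset on every deployment.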

The exception you are hitting seems unrelated. It looks like a Curator/Zookeeper incompatibility.

Edit: I think you are running into https://issues.apache.org/jira/browse/STORM-2978. 1.2.2 should be out soon; please try upgrading once it is released.

Edit of the edit: If you want to work around it without upgrading, edit your topology's pom so it includes a dependency on Zookeeper 3.4 instead of 3.5.
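As an illustration, a hypothetical pom.xml fragment that pins Zookeeper to a 3.4 release (the 3.4.6 version number is an assumption; use whichever 3.4.x matches your cluster):

  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>org.apache.zookeeper</groupId>
        <artifactId>zookeeper</artifactId>
        <!-- placeholder version; align with the Zookeeper release your cluster actually runs -->
        <version>3.4.6</version>
      </dependency>
    </dependencies>
  </dependencyManagement>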