Why is the Apache Storm KafkaSpout emitting so many items from the Kafka topic?

Date: 2017-05-19 15:22:07

Tags: apache-kafka apache-storm

I'm running into a problem with Kafka and Storm. At this point I'm not sure whether it's an issue with the KafkaSpout configuration I'm setting up, whether I'm not acking correctly, or something else.

I put 50 items onto my Kafka topic, but my spout has emitted more than 1300 (and counting) tuples. The spout also reports that almost all of them have failed. The topology doesn't actually fail; it writes to the database successfully, but I don't know why it is apparently replaying everything (if that's what it's doing).

The big question is:

Why is it emitting so many tuples when I only passed 50 to Kafka?
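This is exactly what Storm's at-least-once delivery looks like when tuples are never acked: every pending tuple is re-emitted after each message timeout, so the emit count is roughly messages × replay rounds. The following toy model (plain Java, not actual Storm code; all names are illustrative) shows how 50 un-acked messages snowball past 1300 emits:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy model of Storm's at-least-once replay: tuples that are never acked
// stay "pending" and are re-emitted on every timeout round, so the emit
// counter keeps growing even though the topic only holds 50 records.
public class ReplayModel {

  // Total emits after the initial round plus `replayRounds` timeout replays.
  public static int emittedAfter(int messages, int replayRounds) {
    Queue<Integer> pending = new ArrayDeque<>();
    for (int m = 0; m < messages; m++) {
      pending.add(m); // every message starts out un-acked
    }
    int emitted = 0;
    for (int round = 0; round <= replayRounds; round++) {
      emitted += pending.size(); // nothing is ever acked, so every tuple goes out again
    }
    return emitted;
  }

  public static void main(String[] args) {
    // 50 messages replayed 25 times -> 1300 emits, matching the observed count
    System.out.println(emittedAfter(50, 25));
  }
}
```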


Here is how I set up the topology and the KafkaSpout:

  public static void main(String[] args) {
    try {
      String databaseServerIP = "";
      String kafkaZookeepers = "";
      String kafkaTopicName = "";
      int numWorkers = 1;
      int numAckers = 1;
      int numSpouts = 1;
      int numBolts = 1;
      int messageTimeOut = 10;
      String topologyName = "";

      if (args == null || args.length == 0 || args[0].isEmpty()) {
        System.out.println("Args cannot be null or empty. Exiting");
        return;
      } else {
        if (args.length == 8) {
          for (String arg : args) {
            if (arg == null) {
              System.out.println("Parameters cannot be null. Exiting");
              return;
            }
          }
          databaseServerIP = args[0];
          kafkaZookeepers = args[1];
          kafkaTopicName = args[2];
          numWorkers = Integer.valueOf(args[3]);
          numAckers = Integer.valueOf(args[4]);
          numSpouts = Integer.valueOf(args[5]);
          numBolts = Integer.valueOf(args[6]);
          topologyName = args[7];
        } else {
          System.out.println("Bad parameters: found " + args.length + ", required = 8");
          return;
        }
      }

      Config conf = new Config();

      conf.setNumWorkers(numWorkers);
      conf.setNumAckers(numAckers);
      conf.setMessageTimeoutSecs(messageTimeOut);

      conf.put("databaseServerIP", databaseServerIP);
      conf.put("kafkaZookeepers", kafkaZookeepers);
      conf.put("kafkaTopicName", kafkaTopicName);

      /**
       * Now would put kafkaSpout instance below instead of TemplateSpout()
       */
      TopologyBuilder builder = new TopologyBuilder();
      builder.setSpout(topologyName + "-flatItems-from-kafka-spout", getKafkaSpout(kafkaZookeepers, kafkaTopicName), numSpouts);
      builder.setBolt(topologyName + "-flatItem-Writer-Bolt", new ItemWriterBolt(), numBolts).shuffleGrouping(topologyName + "-flatItems-from-kafka-spout");


      StormTopology topology = builder.createTopology();

      StormSubmitter.submitTopology(topologyName, conf, topology);

    } catch (Exception e) {
      System.out.println("There was a problem starting the topology. Check parameters.");
      e.printStackTrace();
    }
  }

  private static KafkaSpout getKafkaSpout(String zkHosts, String topic) throws Exception {

    //String topic = "FLAT-ITEMS";
    String zkNode = "/" + topic + "-subscriber-pipeline";
    String zkSpoutId = topic + "subscriberpipeline";
    KafkaTopicInZkCreator.createTopic(topic, zkHosts);


    SpoutConfig spoutConfig = new SpoutConfig(new ZkHosts(zkHosts), topic, zkNode, zkSpoutId);
    spoutConfig.startOffsetTime = kafka.api.OffsetRequest.LatestTime();

    // spoutConfig.useStartOffsetTimeIfOffsetOutOfRange = true;
    //spoutConfig.startOffsetTime = System.currentTimeMillis();
    spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

    return new KafkaSpout(spoutConfig);

  }
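If the bolt is acking correctly but simply can't keep up within the 10-second message timeout, capping the number of in-flight tuples and raising the timeout usually stops the replay storm. A sketch against the Storm 1.x `Config` API (the values here are illustrative, not tuned):

```java
import org.apache.storm.Config;

Config conf = new Config();
conf.setMessageTimeoutSecs(60); // more time for the DB write before Storm declares the tuple failed
conf.setMaxSpoutPending(250);   // at most 250 un-acked tuples in flight per spout task
```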

And here is the topic creation, in case it matters:

  public static void createTopic(String topicName, String zookeeperHosts) throws Exception {
    ZkClient zkClient = null;
    ZkUtils zkUtils = null;
    try {

      int sessionTimeOutInMs = 15 * 1000; // 15 secs
      int connectionTimeOutInMs = 10 * 1000; // 10 secs

      zkClient = new ZkClient(zookeeperHosts, sessionTimeOutInMs, connectionTimeOutInMs, ZKStringSerializer$.MODULE$);
      zkUtils = new ZkUtils(zkClient, new ZkConnection(zookeeperHosts), false);

      int noOfPartitions = 1;
      int noOfReplication = 1;
      Properties topicConfiguration = new Properties();

      boolean topicExists = AdminUtils.topicExists(zkUtils, topicName);
      if (!topicExists) {
        AdminUtils.createTopic(zkUtils, topicName, noOfPartitions, noOfReplication, topicConfiguration, RackAwareMode.Disabled$.MODULE$);
      }
    } catch (Exception ex) {
      ex.printStackTrace();
    } finally {
      if (zkClient != null) {
        zkClient.close();
      }
    }
  }

1 answer:

Answer 0 (score: 1)

You need to see whether the messages are failing in the bolt.

If they are all failing, you are probably never acking the message in the bolt, or there is an exception in the bolt code.

If the bolt messages are being acked, then it is more likely a timeout. Increasing the topology timeout config or the parallelism should fix the problem.
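For reference, the ack/fail pattern described above looks roughly like this in a terminal bolt (a sketch against the Storm 1.x API; the real `ItemWriterBolt`'s DB code isn't shown in the question, so `writeToDatabase` below is a hypothetical placeholder):

```java
import java.util.Map;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;

public class AckingItemWriterBolt extends BaseRichBolt {
  private OutputCollector collector;

  @Override
  public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
    this.collector = collector;
  }

  @Override
  public void execute(Tuple tuple) {
    try {
      writeToDatabase(tuple.getString(0)); // hypothetical: the actual persistence call goes here
      collector.ack(tuple);  // without this, every tuple times out and is replayed
    } catch (Exception e) {
      collector.fail(tuple); // fail fast instead of waiting for the message timeout
    }
  }

  @Override
  public void declareOutputFields(OutputFieldsDeclarer declarer) {
    // terminal bolt: nothing to emit downstream
  }

  private void writeToDatabase(String item) {
    // placeholder for the real database write
  }
}
```

Alternatively, extending `BaseBasicBolt` instead makes Storm ack automatically when `execute` returns without throwing.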