Storm WordCountTopology - conceptual problem with the number of executions

Date: 2015-06-09 17:25:14

Tags: java apache-storm word-count

Good afternoon. I am following the storm-starter WordCountTopology here. For reference, here is the Java file.

This is the main file:

public class WordCountTopology {
  public static class SplitSentence extends ShellBolt implements IRichBolt {

    public SplitSentence() {
      super("python", "splitsentence.py");
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
      declarer.declare(new Fields("word"));
    }

    @Override
    public Map<String, Object> getComponentConfiguration() {
      return null;
    }
  }

  public static class WordCount extends BaseBasicBolt {
    Map<String, Integer> counts = new HashMap<String, Integer>();

    @Override
    public void execute(Tuple tuple, BasicOutputCollector collector) {
      String word = tuple.getString(0);
      Integer count = counts.get(word);
      if (count == null)
        count = 0;
      count++;
      counts.put(word, count);
      collector.emit(new Values(word, count));
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
      declarer.declare(new Fields("word", "count"));
    }
  }

  public static void main(String[] args) throws Exception {

    TopologyBuilder builder = new TopologyBuilder();

    builder.setSpout("spout", new TextFileSpout(), 5);

    builder.setBolt("split", new SplitSentence(), 8).shuffleGrouping("spout");
    builder.setBolt("count", new WordCount(), 12).fieldsGrouping("split", new Fields("word"));

    Config conf = new Config();
    conf.setDebug(true);

    if (args != null && args.length > 0) {
      conf.setNumWorkers(3);

      StormSubmitter.submitTopology(args[0], conf, builder.createTopology());
    }
    else {
      conf.setMaxTaskParallelism(3);
      LocalCluster cluster = new LocalCluster();
      cluster.submitTopology("word-count", conf, builder.createTopology());
      Thread.sleep(10000);
      cluster.shutdown();
    }
  }
}

Instead of reading from the random String[], I just want to read from a single sentence:

public class TextFileSpout extends BaseRichSpout {
    SpoutOutputCollector _collector;
    String sentence = "";
    String line = "";
    String splitBy = ",";
    BufferedReader br = null;

    @Override
    public void open(Map conf, TopologyContext context,
            SpoutOutputCollector collector) {
        _collector = collector;

    }

    @Override
    public void nextTuple() {
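        // Note: Storm calls nextTuple() over and over in a loop on each of the
        // spout's executors for as long as the topology is running, so this
        // emits the same sentence on every call.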
        Utils.sleep(100);
        sentence = "wordOne wordTwo";
        _collector.emit(new Values(sentence));
        System.out.println(sentence);
    }

    @Override
    public void ack(Object id) {
    }

    @Override
    public void fail(Object id) {
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }

}

This code runs and prints a lot of thread/emit output. The problem is that it reads the one sentence 85 times instead of just once. I'm guessing this is because the original code generated a new random sentence on each execution.

What is causing nextTuple to be called more than once?

1 answer:

Answer 0 (score: 0)

You should move your file-initialization code into the open method; otherwise the file handle is re-initialized every time nextTuple is called.

Edit:

In the open method, do something like this:

    br = new BufferedReader(new FileReader(csvFileToRead));

Then the logic that reads the file should go in the nextTuple method:

     while ((line = br.readLine()) != null) {
         // your logic
     }
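
Putting the two fragments together, a minimal sketch of the whole spout could look like the code below. This is only a sketch, not the answer author's exact code: csvFileToRead is a made-up file name, and instead of the while loop above it emits one line per nextTuple() call, so the spout stops emitting once the file has been read and the same sentence is not repeated.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.Map;

import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;

public class TextFileSpout extends BaseRichSpout {
    SpoutOutputCollector _collector;
    String csvFileToRead = "sentences.txt"; // made-up path, replace with your file
    BufferedReader br;

    @Override
    public void open(Map conf, TopologyContext context,
            SpoutOutputCollector collector) {
        _collector = collector;
        try {
            // open the file once, when this spout task starts
            br = new BufferedReader(new FileReader(csvFileToRead));
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public void nextTuple() {
        try {
            // Storm calls nextTuple() repeatedly; emit one line per call
            // and emit nothing more once the whole file has been read
            String line = br.readLine();
            if (line != null) {
                _collector.emit(new Values(line));
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }
}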