Filtering data in a Storm bolt

Date: 2020-01-18 08:10:14

Tags: filter apache-storm topology

I have a simple Storm topology that reads data from Kafka, parses the messages and extracts their fields. I would like to filter the stream of tuples by the value of one field and perform a count aggregation on another. How can I do this in Storm? I haven't found corresponding methods on Tuple (filter, aggregate), so should I perform these operations directly on the field values?

Here is the topology:

topologyBuilder.setSpout("kafka_spout", new KafkaSpout(spoutConfig), 1)
topologyBuilder.setBolt("parser_bolt", new ParserBolt()).shuffleGrouping("kafka_spout")
topologyBuilder.setBolt("transformer_bolt", new KafkaTwitterBolt()).shuffleGrouping("parser_bolt")

val config = new Config()
cluster.submitTopology("kafkaTest", config, topologyBuilder.createTopology())

I have set up KafkaTwitterBolt to do the counting and filtering on the parsed fields. So far I have only managed to filter the whole list of values, not on a specific field:

import scala.collection.JavaConverters._

class KafkaTwitterBolt() extends BaseBasicBolt {

 override def execute(input: Tuple, collector: BasicOutputCollector): Unit = {
  val tweetValues = input.getValues.asScala.toList
  val filterTweets = tweetValues
     .map(_.toString)
     .filter(_ contains "big data")
  // Note: this wraps the whole filtered list in a single output value
  val resultAllValues = new Values(filterTweets)
  collector.emit(resultAllValues)
 }

 override def declareOutputFields(declarer: OutputFieldsDeclarer): Unit = {
  declarer.declare(new Fields("created_at", "id", "text", "source", "timestamp_ms",
   "user.id", "user.name", "user.location", "user.url", "user.description", "user.followers_count",
   "user.friends_count", "user.lang", "user.favorite_count", "entities.hashtags"))
 }
}

2 Answers:

Answer 0 (score: 0):

It turns out the Storm core API does not allow this; to filter on any field you should use Trident (it has built-in filter functionality). The code looks like this:

val tridentTopology = new TridentTopology()

val stream = tridentTopology.newStream("kafka_spout",
    new KafkaTridentSpoutOpaque(spoutConfig))
  .map(new ParserMapFunction, new Fields("created_at", "id", "text", "source", "timestamp_ms",
    "user.id", "user.name", "user.location", "user.url", "user.description", "user.followers_count",
    "user.friends_count", "user.favorite_count", "user.lang", "entities.hashtags"))
  .filter(new LanguageFilter)

The filter function itself:

class LanguageFilter extends BaseFilter {

  override def isKeep(tuple: TridentTuple): Boolean = {
    val language = tuple.getStringByField("user.lang")
    println(s"TWEET: $language")
    language.contains("en")
  }
}
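The question also asked for a count aggregation on another field. Trident ships grouping and aggregation primitives for that as well; below is a minimal sketch, assuming the stream declared above and using user.lang as an illustrative grouping field (Count is Trident's built-in count aggregator):

import org.apache.storm.trident.operation.builtin.Count
import org.apache.storm.tuple.Fields

// Sketch: count tweets per language. groupBy partitions the stream by the
// field value; aggregate then emits one count per group.
stream
  .groupBy(new Fields("user.lang"))
  .aggregate(new Count(), new Fields("count"))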

Answer 1 (score: 0):

Your answer at https://stackoverflow.com/a/59805582/8845188 is wrong. The Storm core API does allow filtering and aggregation; you just have to write the logic yourself.

A filtering bolt is just a bolt that discards some tuples and passes others on. For example, the following bolt filters tuples based on a string field:

import scala.collection.JavaConverters._

class FilteringBolt() extends BaseBasicBolt {

 override def execute(input: Tuple, collector: BasicOutputCollector): Unit = {
  val values = input.getValues.asScala.toList
  if (values.headOption.contains("Pass me")) {
    // Re-emit the tuple's values so they continue downstream
    collector.emit(new Values(values: _*))
  }
  //Emitting nothing means discarding the tuple
 }

 override def declareOutputFields(declarer: OutputFieldsDeclarer): Unit = {
  declarer.declare(new Fields("some-field"))
 }
}
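Wired into the question's topology, such a bolt simply sits between the parser and the downstream consumer; a sketch reusing the question's component names:

topologyBuilder.setBolt("filter_bolt", new FilteringBolt()).shuffleGrouping("parser_bolt")
topologyBuilder.setBolt("transformer_bolt", new KafkaTwitterBolt()).shuffleGrouping("filter_bolt")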

An aggregating bolt is just a bolt that collects several tuples and then emits a new aggregate tuple anchored to the originals:

import scala.collection.JavaConverters._
import scala.collection.mutable.ListBuffer

class AggregatingBolt extends BaseRichBolt {
  private var collector: OutputCollector = _
  private val tuplesToAggregate = ListBuffer.empty[Tuple]

  override def prepare(stormConf: java.util.Map[_, _], context: TopologyContext,
                       collector: OutputCollector): Unit = {
    this.collector = collector
  }

  override def execute(input: Tuple): Unit = {
    tuplesToAggregate += input
    if (tuplesToAggregate.size == 10) {
      val aggregateTuple: Values = ??? // create a new set of values based on tuplesToAggregate
      // This anchors the new aggregate tuple to all the original tuples, so if
      // the aggregate fails, the original tuples are replayed.
      collector.emit(tuplesToAggregate.asJava, aggregateTuple)
      // Ack the original tuples now that this bolt is done with them.
      // Note that you MUST emit before you ack, or the at-least-once guarantee is broken.
      tuplesToAggregate.foreach(collector.ack)
      tuplesToAggregate.clear()
    }
    // We don't ack the input tuples until the aggregate is emitted; this lets us
    // replay all the aggregated tuples in case the aggregate fails.
  }

  override def declareOutputFields(declarer: OutputFieldsDeclarer): Unit = {
    declarer.declare(new Fields("aggregate")) // illustrative output field
  }
}

Note that for the aggregation you need to extend BaseRichBolt and ack tuples manually, because you want to delay acking each tuple until it has been included in an aggregate tuple.
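Since the goal is a count per field value, the aggregating bolt would normally be wired with a fieldsGrouping, so that all tuples sharing the grouped field's value reach the same bolt task; a sketch against the question's topology ("counting_bolt", the parallelism hint, and the grouping field are illustrative):

// All tuples with the same user.lang value go to the same bolt task,
// keeping per-language counts consistent.
topologyBuilder.setBolt("counting_bolt", new AggregatingBolt(), 2)
  .fieldsGrouping("transformer_bolt", new Fields("user.lang"))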
