Why is httpcomponents slowing down my topology after the first processed tuple?

Date: 2016-12-08 11:06:50

Tags: java apache-kafka apache-storm apache-httpcomponents

I built a Storm topology that receives tuples from Apache Kafka via a kafka-spout, writes this data (using another bolt) as a String to a .txt file on my local system, and afterwards sends an httpPost from a PostBolt.

Both bolts are connected to the kafka-spout.

If I test the topology without the PostBolt, everything works fine. But if I add that bolt to the topology, the whole topology gets blocked for some reason.

Has anyone run into the same problem, or does anyone have a hint what might be causing this?

I have read that there are some issues with CloseableHttpClient or CloseableHttpResponse blocking threads from working... could that be the same problem in this case?

The code of my PostBolt:

public class PostBolt extends BaseRichBolt {

    private CloseableHttpClient httpclient;

    @Override
    public final void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        //empty for now
    }

    @Override
    public final void execute(Tuple tuple) {

        //create a new HttpClient (once per incoming tuple):
        httpclient = HttpClients.createDefault();
        String url = "http://xxx.xxx.xx.xxx:8080/HTTPServlet/httpservlet";
        HttpPost post = new HttpPost(url);

        post.setHeader("str1", "TEST TEST TEST");

        try {
            CloseableHttpResponse postResponse = httpclient.execute(post);
            System.out.println(postResponse.getStatusLine());
            System.out.println("=====sending POST=====");
            HttpEntity postEntity = postResponse.getEntity();
            //do something useful with the response body
            //and ensure that it is fully consumed
            EntityUtils.consume(postEntity);
            postResponse.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("HttpPost"));
    }
}

The code of my topology:

public static void main(String[] args) throws Exception {

    /**
    *   create a config for Kafka-Spout (and Kafka-Bolt)
    */
    Config config = new Config();
    config.setDebug(true);
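    //note: with TOPOLOGY_MAX_SPOUT_PENDING set to 1, a single tuple that is
    //never acked stalls the spout (see the answers below)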
    config.put(Config.TOPOLOGY_MAX_SPOUT_PENDING, 1);
    //setup zookeeper connection
    String zkConnString = "localhost:2181";
    //define Kafka topic for the spout
    String topic = "mytopic";
    //assign the zookeeper connection to brokerhosts
    BrokerHosts hosts = new ZkHosts(zkConnString);

    //setting up spout properties
    SpoutConfig kafkaSpoutConfig = new SpoutConfig(hosts, topic, "/" +topic, UUID.randomUUID().toString());
    kafkaSpoutConfig.bufferSizeBytes = 1024 * 1024 * 4;
    kafkaSpoutConfig.fetchSizeBytes = 1024 * 1024 * 4;
    kafkaSpoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

    /**
    *   Build the Topology by linking the spout and bolts together
    */
    TopologyBuilder builder = new TopologyBuilder();
    builder.setSpout("kafka-spout", new KafkaSpout(kafkaSpoutConfig));
    builder.setBolt("printer-bolt", new PrinterBolt()).shuffleGrouping("kafka-spout");
    builder.setBolt("post-bolt", new PostBolt()).shuffleGrouping("kafka-spout");

    /**
    *   Check if we're running locally or on a real cluster
    */
    if (args != null && args.length >0) {
        config.setNumWorkers(6);
        config.setNumAckers(6);
        config.setMaxSpoutPending(100);
        config.setMessageTimeoutSecs(20);
        StormSubmitter.submitTopology("StormKafkaTopology", config, builder.createTopology());
    } else {
        config.setMaxTaskParallelism(3);
        config.setNumWorkers(6);
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("StormKafkaTopology", config, builder.createTopology());
        //Utils.sleep(100000);
        //cluster.killTopology("StormKafkaTopology");
        //cluster.shutdown();
    }
}

2 Answers:

Answer 0 (score: 1)

It seems to me that you have already answered your own question, but... according to this answer you should use a PoolingHttpClientConnectionManager, because you will be running in a multithreaded environment.
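
A minimal sketch of what that could look like in prepare(), assuming HttpClient 4.3+ (the pool sizes are made-up values, tune them to your setup):

PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
cm.setMaxTotal(20);            //max connections in total (made-up value)
cm.setDefaultMaxPerRoute(10);  //max connections per target host (made-up value)
httpclient = HttpClients.custom().setConnectionManager(cm).build();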

EDIT:

public class PostBolt extends BaseRichBolt {
    private static Logger LOG = LoggerFactory.getLogger(PostBolt.class);
    private CloseableHttpClient httpclient;
    private OutputCollector _collector;

    @Override
    public final void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        //create the client once per bolt instance, not once per tuple
        httpclient = HttpClients.createDefault();
        _collector = collector;
    }

    @Override
    public final void execute(Tuple tuple) {
        String url = "http://xxx.xxx.xx.xxx:8080/HTTPServlet/httpservlet";
        HttpPost post = new HttpPost(url);
        post.setHeader("str1", "TEST TEST TEST");

        //try-with-resources closes the response exactly once, even on errors
        try (CloseableHttpResponse postResponse = httpclient.execute(post)) {
            LOG.info("{}", postResponse.getStatusLine());
            LOG.info("=====sending POST=====");
            HttpEntity postEntity = postResponse.getEntity();
            //do something useful with the response body
            //and ensure that it is fully consumed
            EntityUtils.consume(postEntity);
            //ack, so the spout does not resend this tuple (see the other answer)
            _collector.ack(tuple);
        } catch (Exception e) {
            LOG.error("PostBolt execute error", e);
            _collector.reportError(e);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("HttpPost"));
    }

}

Answer 1 (score: 0)

Well, I identified the problem thanks to this comment: https://stackoverflow.com/a/32080845/7208987

The KafkaSpout will keep resending tuples that are not "acked" by the endpoint they were sent to.

So I just had to ack the incoming tuples inside the bolts, and the hiccup of the topology disappeared.

(I noticed the problem because the PrinterBolt did keep writing even though there was no further input from the kafka-spout.)
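
For reference, here is a minimal sketch of the acking pattern in a BaseRichBolt, assuming the OutputCollector was stored in a _collector field in prepare() (as in the other answer); the POST itself is elided:

@Override
public final void execute(Tuple tuple) {
    try {
        //...build and send the HTTP POST as before...
        _collector.ack(tuple);   //tell the KafkaSpout this tuple is done
    } catch (Exception e) {
        _collector.reportError(e);
        _collector.fail(tuple);  //let the KafkaSpout replay the tuple
    }
}

Alternatively, a bolt that extends BaseBasicBolt instead of BaseRichBolt gets its tuples acked automatically when execute() returns without throwing.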