Does Apache Storm do the resource management job in the cluster by itself?

Asked: 2015-10-06 08:14:10

Tags: hadoop hdfs yarn apache-storm

Well, I am new to Apache Storm, and after some searching and reading of tutorials, I still don't understand how fault tolerance, load balancing, and the other resource-manager duties are handled in a Storm cluster. Should Storm be configured on top of YARN, or does it do the resource management job itself? Does it come with its own HDFS component, or does an existing HDFS need to be configured in the cluster first?

1 answer:

Answer 0 (score: 4)

Storm can manage its resources by itself, or it can run on top of YARN. If you have a shared cluster (i.e., one that also runs other systems such as Hadoop, Spark, or Flink), using YARN should be the better choice to avoid resource conflicts.
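To illustrate the standalone case, here is a minimal sketch (not from the original answer) of how a topology asks Storm's own Nimbus scheduler for resources: the parallelism hints and `setNumWorkers` determine how many executors and worker JVMs Storm places onto the supervisors' worker slots, with no YARN involved. It assumes the standard Storm Java API (packages are `org.apache.storm.*` in Storm 1.x/2.x; older 0.x releases used `backtype.storm.*`), and the spout/bolt classes are trivial placeholders written only for this example.

```java
import java.util.Map;
import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;
import org.apache.storm.utils.Utils;

public class ResourceDemoTopology {

    // Placeholder spout that emits a constant word; stands in for a real data source.
    public static class WordSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;
        public void open(Map conf, TopologyContext ctx, SpoutOutputCollector collector) {
            this.collector = collector;
        }
        public void nextTuple() {
            Utils.sleep(100);
            collector.emit(new Values("storm"));
        }
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("word"));
        }
    }

    // Placeholder bolt that simply consumes tuples.
    public static class NoOpBolt extends BaseBasicBolt {
        public void execute(Tuple tuple, BasicOutputCollector collector) { /* no-op */ }
        public void declareOutputFields(OutputFieldsDeclarer declarer) { }
    }

    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        // Parallelism hints: Storm's Nimbus scheduler decides on which supervisors
        // these executors run.
        builder.setSpout("words", new WordSpout(), 2);
        builder.setBolt("sink", new NoOpBolt(), 4).shuffleGrouping("words");

        Config conf = new Config();
        conf.setNumWorkers(4); // worker JVMs Storm allocates across supervisor slots

        StormSubmitter.submitTopology("resource-demo", conf, builder.createTopology());
    }
}
```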

Regarding HDFS: Storm is independent of HDFS. If you want to run it on top of HDFS, you need to set up HDFS yourself. Furthermore, Storm provides spouts/bolts for accessing HDFS: https://storm.apache.org/documentation/storm-hdfs.html
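As a concrete illustration of that integration point, below is a sketch of configuring the `HdfsBolt` from the storm-hdfs module linked above, which writes incoming tuples to an existing HDFS installation. The NameNode URL and output path are assumptions for the example; check the linked documentation for the exact class names in your Storm version.

```java
import org.apache.storm.hdfs.bolt.HdfsBolt;
import org.apache.storm.hdfs.bolt.format.DefaultFileNameFormat;
import org.apache.storm.hdfs.bolt.format.DelimitedRecordFormat;
import org.apache.storm.hdfs.bolt.format.FileNameFormat;
import org.apache.storm.hdfs.bolt.format.RecordFormat;
import org.apache.storm.hdfs.bolt.rotation.FileRotationPolicy;
import org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy;
import org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy.Units;
import org.apache.storm.hdfs.bolt.sync.CountSyncPolicy;
import org.apache.storm.hdfs.bolt.sync.SyncPolicy;

public class HdfsBoltExample {

    public static HdfsBolt buildHdfsBolt() {
        // Write tuples as pipe-delimited text lines.
        RecordFormat format = new DelimitedRecordFormat().withFieldDelimiter("|");

        // Sync the HDFS file system after every 1000 tuples.
        SyncPolicy syncPolicy = new CountSyncPolicy(1000);

        // Rotate output files once they reach 5 MB.
        FileRotationPolicy rotationPolicy = new FileSizeRotationPolicy(5.0f, Units.MB);

        // Output directory inside the existing HDFS cluster (assumed path).
        FileNameFormat fileNameFormat = new DefaultFileNameFormat().withPath("/storm/");

        return new HdfsBolt()
                .withFsUrl("hdfs://namenode-host:8020") // address of your HDFS NameNode (assumption)
                .withFileNameFormat(fileNameFormat)
                .withRecordFormat(format)
                .withRotationPolicy(rotationPolicy)
                .withSyncPolicy(syncPolicy);
    }
}
```

The returned bolt can then be wired into a topology with `builder.setBolt(...)` like any other bolt; Storm itself does not require HDFS, the dependency only appears if you choose to use this connector.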