Elasticsearch and Spark: when I lose one ES node of my cluster, it stops working

Date: 2016-04-25 08:56:30

Tags: elasticsearch apache-spark

I have an Elasticsearch cluster with 5 nodes and a Spark cluster. My index has 5 shards and 1 replica. Everything works fine until I lose one ES node; then indexing essentially stops working.

Usually, I index about 30K docs/sec. After losing one node, throughput drops to about 500 docs/sec. I thought it might be because the shards were being replicated, but I waited until all shards were green again and saw the same behavior. I also tried restarting the 4 remaining Elasticsearch nodes, but that didn't fix anything.

If I start with an ES cluster of only 3 nodes, it works fine as well. So it doesn't look like a performance problem or a matter of not having enough nodes.

I'm using Spark 1.6, ES 2.2.0, and the es-hadoop library.
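For context, es-hadoop's write behavior around failed nodes is controlled by a handful of connector settings. A minimal sketch of the relevant keys is below; the node hostnames are hypothetical placeholders, and the values shown are the connector's documented defaults, not a recommended fix:

```properties
# es-hadoop connection settings (hostnames are placeholders)
es.nodes = es-node1,es-node2,es-node3,es-node4,es-node5
es.port = 9200

# How many times a failed bulk write is retried before the
# task fails, and how long to wait between retries. When a
# node drops out, retries and re-routing to the remaining
# nodes can slow bulk indexing considerably.
es.batch.write.retry.count = 3
es.batch.write.retry.wait = 10s

# Whether the connector discovers additional nodes in the
# cluster beyond those listed in es.nodes.
es.nodes.discovery = true
```

These can be passed as Spark configuration (e.g. via `SparkConf`) or per-write options when calling `saveToEs`; checking the connector logs for bulk retries after the node failure would show whether writes are being repeatedly retried against the lost node.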

0 answers:

No answers yet