Hazelcast ScheduledExecutorService loses tasks after a node shuts down

Asked: 2017-05-15 14:36:55

Tags: hazelcast

I am trying to use the Hazelcast ScheduledExecutorService to run some periodic tasks. I am using Hazelcast 3.8.1.

I start one node and then another; the tasks are distributed across both nodes and executed correctly.

If I shut down the first node, the second node starts executing the periodic tasks that previously ran on the first node.

The problem is that if I stop the second node instead of the first, its tasks are not rescheduled onto the first node. The same thing happens with more nodes: if I shut down the last node to have received the tasks, those tasks are lost.

Shutdowns are always done with Ctrl+C.

I created a test application containing some sample code from the Hazelcast examples plus some code I found on the web. I start two instances of this application.

public class MasterMember {

/**
 * The logger.
 */
final static Logger logger = LoggerFactory.getLogger(MasterMember.class);

public static void main(String[] args) throws Exception {

    Config config = new Config();
    config.setProperty("hazelcast.logging.type", "slf4j");
    config.getScheduledExecutorConfig("scheduler")
            .setPoolSize(16).setCapacity(100).setDurability(1);

    final HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);

    Runtime.getRuntime().addShutdownHook(new Thread() {

        HazelcastInstance threadInstance = instance;

        @Override
        public void run() {
            logger.info("Application shutdown");

            for (int i = 0; i < 12; i++) {
                logger.info("Verifying whether it is safe to close this instance");
                boolean isSafe = getResultsForAllInstances(hzi -> {
                    if (hzi.getLifecycleService().isRunning()) {
                        return hzi.getPartitionService().forceLocalMemberToBeSafe(10, TimeUnit.SECONDS);
                    }
                    return true;
                });

                if (isSafe) {
                    logger.info("Verifying whether cluster is safe.");
                    isSafe = getResultsForAllInstances(hzi -> {
                        if (hzi.getLifecycleService().isRunning()) {
                            return hzi.getPartitionService().isClusterSafe();
                        }
                        return true;
                    });

                    if (isSafe) {
                        logger.info("Cluster is safe.");
                        break;
                    }
                }

                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    // restore the interrupt flag instead of swallowing it
                    Thread.currentThread().interrupt();
                }
            }

            threadInstance.shutdown();

        }

        private boolean getResultsForAllInstances(
                Function<HazelcastInstance, Boolean> hazelcastInstanceBooleanFunction) {

            return Hazelcast.getAllHazelcastInstances().stream().map(hazelcastInstanceBooleanFunction).reduce(true,
                    (old, next) -> old && next);
        }
    });

    new Thread(() -> {

        try {
            Thread.sleep(10000);
        } catch (InterruptedException e) {
            // restore the interrupt flag instead of swallowing it
            Thread.currentThread().interrupt();
        }

        IScheduledExecutorService scheduler = instance.getScheduledExecutorService("scheduler");
        scheduler.scheduleAtFixedRate(named("1", new EchoTask("1")), 5, 10, TimeUnit.SECONDS);
        scheduler.scheduleAtFixedRate(named("2", new EchoTask("2")), 5, 10, TimeUnit.SECONDS);
        scheduler.scheduleAtFixedRate(named("3", new EchoTask("3")), 5, 10, TimeUnit.SECONDS);
        scheduler.scheduleAtFixedRate(named("4", new EchoTask("4")), 5, 10, TimeUnit.SECONDS);
        scheduler.scheduleAtFixedRate(named("5", new EchoTask("5")), 5, 10, TimeUnit.SECONDS);
        scheduler.scheduleAtFixedRate(named("6", new EchoTask("6")), 5, 10, TimeUnit.SECONDS);
    }).start();

    new Thread(() -> {

        try {
            // delays init
            Thread.sleep(20000);

            while (true) {

                IScheduledExecutorService scheduler = instance.getScheduledExecutorService("scheduler");
                final Map<Member, List<IScheduledFuture<Object>>> allScheduledFutures =
                        scheduler.getAllScheduledFutures();

                // check if the subscription already exists as a task, if so, stop it
                for (final List<IScheduledFuture<Object>> entry : allScheduledFutures.values()) {
                    for (final IScheduledFuture<Object> objectIScheduledFuture : entry) {
                        logger.info(
                                "TaskStats: name {} isDone() {} isCanceled() {} total runs {} delay (sec) {} other statistics {} ",
                                objectIScheduledFuture.getHandler().getTaskName(), objectIScheduledFuture.isDone(),
                                objectIScheduledFuture.isCancelled(),
                                objectIScheduledFuture.getStats().getTotalRuns(),
                                objectIScheduledFuture.getDelay(TimeUnit.SECONDS),
                                objectIScheduledFuture.getStats());
                    }
                }

                Thread.sleep(15000);

            }

        } catch (InterruptedException e) {
            // restore the interrupt flag; the monitoring loop ends here
            Thread.currentThread().interrupt();
        }

    }).start();

    while (true) {
        Thread.sleep(1000);
    }
    // Hazelcast.shutdownAll();
}
}
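For reference, the scheduled executor settings applied programmatically above can also be declared in Hazelcast's XML configuration. The following is an unverified sketch of a hazelcast.xml fragment, assuming the 3.8 schema (the element names mirror the ScheduledExecutorConfig setters):

```xml
<hazelcast xmlns="http://www.hazelcast.com/schema/config">
    <!-- Equivalent to config.getScheduledExecutorConfig("scheduler")
         .setPoolSize(16).setCapacity(100).setDurability(1) -->
    <scheduled-executor-service name="scheduler">
        <pool-size>16</pool-size>
        <durability>1</durability>
        <capacity>100</capacity>
    </scheduled-executor-service>
</hazelcast>
```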

The task:

public class EchoTask implements Runnable, Serializable {

/**
 * serialVersionUID
 */
private static final long serialVersionUID = 5505122140975508363L;

final Logger logger = LoggerFactory.getLogger(EchoTask.class);

private final String msg;

public EchoTask(String msg) {
    this.msg = msg;
}

@Override
public void run() {
    logger.info("--> " + msg);
}
}

What am I doing wrong?

Thanks in advance.

- 编辑 -

Modified the code (and updated it above) to use the logger instead of System.out. Added logging of task statistics and fixed the use of the Config object.

Logs:

Node1_log

Node2_log

I forgot to mention that I wait until all tasks are running on the first node before starting the second one.

2 answers:

Answer 0 (score: 0)

I was able to fix this quickly by changing the ScheduledExecutorContainer class of the Hazelcast project (using the 3.8.1 source code), namely its promoteStash() method. Basically, I added a condition for the case where our task was cancelled during a previous data migration. I don't know what side effects this change might have, or whether it is the best way to do it!

 void promoteStash() {
    for (ScheduledTaskDescriptor descriptor : tasks.values()) {
        try {
            if (logger.isFinestEnabled()) {
                logger.finest("[Partition: " + partitionId + "] " + "Attempt to promote stashed " + descriptor);
            }

            if (descriptor.shouldSchedule()) {
                doSchedule(descriptor);
            } else if (descriptor.getTaskResult() != null && descriptor.getTaskResult().isCancelled()
                    && descriptor.getScheduledFuture() == null) {
                // Tasks that used to live on this node were cancelled when they were
                // migrated to another node; when ownership returns here they are not
                // rescheduled, so clear the stale result and schedule them again.
                logger.fine("[Partition: " + partitionId + "] " + "Attempt to promote stashed canceled task "
                        + descriptor);

                descriptor.setTaskResult(null);
                doSchedule(descriptor);
            }

            descriptor.setTaskOwner(true);
        } catch (Exception e) {
            throw rethrow(e);
        }
    }
}

Answer 1 (score: 0)

Bruno, thanks for the report, this is indeed a bug. Unfortunately it is not so obvious with multiple nodes, but it is with just two. As you figured out in your answer, the tasks are not lost; rather, they get cancelled after a migration. However, your fix is not safe, because a task can be cancelled and have a null Future at the same time, e.g. when you cancel the primary replica, a backup that never had a Future just gets the result. The actual fix is very close to what you did: we avoid setting the result while in migrationMode. I will push a fix shortly, I just need to run a few more tests. It will be available in master and in later versions.
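Until the fix lands, a possible client-side mitigation is to detect futures that came out of a migration in the cancelled state and re-submit them. The sketch below is my own untested assumption, not an official recipe: the class and method names are hypothetical, and the "scheduler" executor name and 5/10-second periods are taken from the question. It only uses API present in 3.8 (getAllScheduledFutures(), isCancelled(), dispose(), TaskUtils.named()):

```java
import java.io.Serializable;
import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeUnit;

import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.Member;
import com.hazelcast.scheduledexecutor.IScheduledExecutorService;
import com.hazelcast.scheduledexecutor.IScheduledFuture;
import com.hazelcast.scheduledexecutor.TaskUtils;

public class RescheduleWorkaround {

    /**
     * Re-submits a named task if its future was left cancelled after a migration.
     * dispose() must be called first so the task name can be reused.
     */
    static <T extends Runnable & Serializable> void rescheduleIfCancelled(
            HazelcastInstance instance, String name, T task) {
        IScheduledExecutorService scheduler = instance.getScheduledExecutorService("scheduler");
        Map<Member, List<IScheduledFuture<Object>>> futures = scheduler.getAllScheduledFutures();
        for (List<IScheduledFuture<Object>> perMember : futures.values()) {
            for (IScheduledFuture<Object> future : perMember) {
                if (name.equals(future.getHandler().getTaskName()) && future.isCancelled()) {
                    future.dispose(); // free the task name before re-submitting
                    scheduler.scheduleAtFixedRate(TaskUtils.named(name, task), 5, 10, TimeUnit.SECONDS);
                    return;
                }
            }
        }
    }
}
```

This would have to be called periodically (for example, from the monitoring thread in the question) for every task name. It is a stopgap rather than a fix: a task may miss runs between being cancelled and being re-submitted.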

I have logged an issue with your findings; if you don't mind, you can track its status at https://github.com/hazelcast/hazelcast/issues/10603.