How to properly wait for an Apache Spark Launcher job when launching it from another application?

Date: 2016-03-28 14:53:31

Tags: java apache-spark spark-launcher

I am trying to avoid a "while(true)" solution for waiting until my Apache Spark job is done, but without success.

I have a Spark application that is supposed to process some data and put the results into a database. I call it from my Spring service and would like to wait until the job is done.

Example:

Launcher method:

@Override
public void run(UUID docId, String query) throws Exception {
    launcher.addAppArgs(docId.toString(), query);

    SparkAppHandle sparkAppHandle = launcher.startApplication();

    sparkAppHandle.addListener(new SparkAppHandle.Listener() {
        @Override
        public void stateChanged(SparkAppHandle handle) {
            System.out.println(handle.getState() + " new  state");
        }

        @Override
        public void infoChanged(SparkAppHandle handle) {
            System.out.println(handle.getState() + " new  state");
        }
    });

    System.out.println(sparkAppHandle.getState().toString());
}

How do I properly wait until the handle's state is "FINISHED"?

2 Answers:

Answer 0 (score: 3):

I am also using SparkLauncher from a Spring application. Here is a summary of the approach I took (following the example in the JavaDoc).

The @Service used to launch the job also implements SparkAppHandle.Listener and passes a reference to itself via startApplication, e.g.

...
...
@Service
public class JobLauncher implements SparkAppHandle.Listener {
...
...
...
private SparkAppHandle launchJob(String mainClass, String[] args) throws Exception {

    String appResource = getAppResourceName();

    SparkAppHandle handle = new SparkLauncher()
        .setAppResource(appResource).addAppArgs(args)
        .setMainClass(mainClass)
        .setMaster(sparkMaster)
        .setDeployMode(sparkDeployMode)
        .setSparkHome(sparkHome)
        .setConf(SparkLauncher.DRIVER_MEMORY, "2g")
        .startApplication(this);

    LOG.info("Launched [" + mainClass + "] from [" + appResource + "] State [" + handle.getState() + "]");

    return handle;
}

/**
* Callback method for changes to the Spark Job
*/
@Override
public void infoChanged(SparkAppHandle handle) {

    LOG.info("Spark App Id [" + handle.getAppId() + "] Info Changed.  State [" + handle.getState() + "]");

}

/**
* Callback method for changes to the Spark Job's state
*/
@Override
public void stateChanged(SparkAppHandle handle) {

    LOG.info("Spark App Id [" + handle.getAppId() + "] State Changed. State [" + handle.getState() + "]");

}

Using this approach, you can take action when the state changes to "FAILED", "FINISHED", or "KILLED".
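
One way to turn these callbacks into a blocking wait is to complete a future from the listener. A minimal sketch, assuming one job per service instance (the jobDone field is hypothetical, not part of the answer above):

// Hypothetical field on the @Service (requires java.util.concurrent.CompletableFuture).
private final CompletableFuture<SparkAppHandle.State> jobDone = new CompletableFuture<>();

@Override
public void stateChanged(SparkAppHandle handle) {
    LOG.info("Spark App Id [" + handle.getAppId() + "] State Changed. State [" + handle.getState() + "]");
    // State.isFinal() is true once the application has reached a terminal state.
    if (handle.getState().isFinal()) {
        jobDone.complete(handle.getState());
    }
}

// A caller can then block until the job ends (or a timeout expires):
// SparkAppHandle.State finalState = jobDone.get(10, TimeUnit.MINUTES);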

I hope this information helps.

Answer 1 (score: 3):

I used a CountDownLatch, and it works as expected.

...
final CountDownLatch countDownLatch = new CountDownLatch(1);
SparkAppListener sparkAppListener = new SparkAppListener(countDownLatch);
SparkAppHandle appHandle = sparkLauncher.startApplication(sparkAppListener);
Thread sparkAppListenerThread = new Thread(sparkAppListener);
sparkAppListenerThread.start();
long timeout = 120;
countDownLatch.await(timeout, TimeUnit.SECONDS);
...

private static class SparkAppListener implements SparkAppHandle.Listener, Runnable {
    private static final Log log = LogFactory.getLog(SparkAppListener.class);
    private final CountDownLatch countDownLatch;
    public SparkAppListener(CountDownLatch countDownLatch) {
        this.countDownLatch = countDownLatch;
    }
    @Override
    public void stateChanged(SparkAppHandle handle) {
        String sparkAppId = handle.getAppId();
        State appState = handle.getState();
        if (sparkAppId != null) {
            log.info("Spark job with app id: " + sparkAppId + ",\t State changed to: " + appState + " - "
                    + SPARK_STATE_MSG.get(appState));
        } else {
            log.info("Spark job's state changed to: " + appState + " - " + SPARK_STATE_MSG.get(appState));
        }
        if (appState != null && appState.isFinal()) {
            countDownLatch.countDown();
        }
    }
    @Override
    public void infoChanged(SparkAppHandle handle) {}
    @Override
    public void run() {}
}
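
As a follow-up sketch (not part of the answer above): await returns false when the timeout elapses before the latch is released, so the caller can distinguish a job that reached a final state from one that is still running:

boolean reachedFinalState = countDownLatch.await(timeout, TimeUnit.SECONDS);
if (!reachedFinalState) {
    // Timed out before the job ended; ask the Spark application to stop.
    appHandle.stop();
} else if (appHandle.getState() == SparkAppHandle.State.FINISHED) {
    log.info("Spark job finished successfully");
} else {
    log.warn("Spark job ended in state: " + appHandle.getState());
}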