How best to run Apache Airflow tasks on a Kubernetes cluster?

Asked: 2018-06-19 10:40:50

Tags: kubernetes airflow job-scheduling

What we are trying to achieve:

We would like to use Airflow to manage our machine learning and data pipelines, and Kubernetes to manage resources and schedule jobs. What we want is for Airflow to orchestrate the workflow (e.g. task dependencies, re-running jobs upon failure) and for Kubernetes to orchestrate the infrastructure (e.g. cluster autoscaling and allocating individual jobs to nodes). In other words, Airflow tells the Kubernetes cluster what to do, and Kubernetes decides how to distribute the work. At the same time, we also want Airflow to be able to monitor the status of individual tasks. For example, if we spread 10 tasks across a cluster of 5 nodes, Airflow should be able to communicate with the cluster and report something like: 3 "small tasks" are done, 1 "small task" failed and will be scheduled to re-run, and the remaining 6 "big tasks" are still running.

The problem:

Our understanding is that Airflow has no Kubernetes Operator; see the open issue at https://issues.apache.org/jira/browse/AIRFLOW-1314. That said, we don't want Airflow to manage resources (managing service accounts, env variables, creating the cluster, etc.), but simply to send tasks to an existing Kubernetes cluster and have Airflow be notified when jobs complete. An alternative would be Apache Mesos, but it looks less flexible and less straightforward compared to Kubernetes.

I suppose we could use Airflow's bash_operator to run kubectl, but that doesn't seem like the most elegant solution.
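For the record, a minimal sketch of that kubectl-via-BashOperator approach, assuming a pre-written Job manifest and a kubectl recent enough to have the wait subcommand (1.11+); the manifest path and job name below are placeholders:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash_operator import BashOperator

    dag = DAG("kubectl_dag", start_date=datetime(2018, 6, 1), schedule_interval=None)

    # Submit the Job and block until Kubernetes reports it complete.
    run_job = BashOperator(
        task_id="run_k8s_job",
        bash_command=(
            "kubectl apply -f /path/to/job.yaml && "
            "kubectl wait --for=condition=complete --timeout=3600s job/my-job"
        ),
        dag=dag,
    )

The obvious downside is that Airflow only sees kubectl's exit code, not the actual pod state, which is part of why it feels inelegant.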

Any thoughts? How have you dealt with this?

1 Answer:

Answer (score: 5)

Airflow has both a KubernetesExecutor and a Kubernetes Operator (the KubernetesPodOperator).

You can use the Kubernetes Operator to send tasks (in the form of Docker images) from Airflow to Kubernetes via whichever Airflow executor you prefer.
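For illustration, a minimal sketch of such a task as of Airflow 1.10 (the image name and command are placeholders):

    from datetime import datetime

    from airflow import DAG
    from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator

    dag = DAG("k8s_pod_dag", start_date=datetime(2018, 6, 1), schedule_interval=None)

    # Each task launches its own pod; Airflow watches the pod and
    # streams its logs, so the task's state follows the pod's state.
    train = KubernetesPodOperator(
        task_id="train_model",
        name="train-model",                # pod name in the cluster
        namespace="default",
        image="my-registry/train:latest",  # placeholder image
        cmds=["python", "train.py"],       # placeholder command
        get_logs=True,
        dag=dag,
    )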

Based on your description, though, I believe you are looking for the KubernetesExecutor to schedule all of your tasks against your Kubernetes cluster. As you can see from the source code, it has a much tighter integration with Kubernetes.
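Switching to it is a configuration change; roughly, in airflow.cfg (a sketch for Airflow 1.10, with placeholder repository, tag, and namespace):

    [core]
    executor = KubernetesExecutor

    [kubernetes]
    # Image that the dynamically launched worker pods will run (placeholders).
    worker_container_repository = my-registry/airflow
    worker_container_tag = 1.10.0
    namespace = airflow
    in_cluster = True

With this executor, each task instance runs in its own pod, which gives you the kind of per-task reporting you describe: Airflow learns from Kubernetes which tasks have finished, which failed and will be retried, and which are still running.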

This also saves you from having to build the Docker images ahead of time, as is required with the Kubernetes Operator.