I just set up a Cloud Composer environment with Python 3 and Composer image version composer-1.4.0-airflow-1.10.0. Everything else is "stock"; i.e. there are no configuration overrides.
I am trying to test a very simple DAG. It runs without problems on my local Airflow server, but on Cloud Composer the web server's task details view shows the message Dependencies Blocking Task From Getting Scheduled. The dependency is Unknown, with the following reason:
All dependencies are met but the task instance is not running. In most cases this just means that the task will probably be scheduled soon unless:
- The scheduler is down or under heavy load
- The following configuration values may be limiting the number of queueable processes: parallelism, dag_concurrency, max_active_dag_runs_per_dag, non_pooled_task_slot_count
If this task instance does not start soon please contact your Airflow administrator for assistance.
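(As a side note, those configuration values can be inspected from inside the environment with something like the sketch below; the section and option names assume a stock Airflow 1.10 configuration, where the message's max_active_dag_runs_per_dag corresponds to the [core] option max_active_runs_per_dag.)

from airflow.configuration import conf

# Print the [core] settings the message refers to (names assume stock Airflow 1.10).
for option in ('parallelism', 'dag_concurrency',
               'max_active_runs_per_dag', 'non_pooled_task_slot_count'):
    print(option, conf.get('core', option))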
This happens both when the task runs on its schedule and when I trigger it manually from the web server (before doing so, I set all task instances to success to avoid delays). I have also tried resetting the scheduler in Kubernetes as per this answer, but the task still remains stuck in the scheduled state.
In addition, I noticed that on my local instance (which runs the web server, worker, and scheduler in separate Docker containers), the Hostname column in the Task Instances view is populated, but on Cloud Composer it is empty.
Here is the DAG I am running:
from datetime import datetime, timedelta
import random

from airflow import DAG
from airflow.operators.python_operator import PythonOperator

default_args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'email_on_failure': False,
    'email_on_retry': False,
    'queue': 'airflow',
    'start_date': datetime.today() - timedelta(days=2),
    'schedule_interval': None,
    'retries': 2,
    'retry_delay': timedelta(seconds=15),
    'priority_weight': 10,
}

example_dag = DAG(
    'example_dag',
    default_args=default_args,
    schedule_interval=timedelta(days=1)
)


def always_succeed():
    pass


always_succeed_operator = PythonOperator(
    dag=example_dag,
    python_callable=always_succeed,
    task_id='always_succeed'
)


def might_fail():
    # Intentionally raises ZeroDivisionError about half the time.
    return 1 / random.randint(0, 1)


might_fail_operator = PythonOperator(
    dag=example_dag, python_callable=might_fail, task_id='might_fail'
)

might_fail_operator.set_upstream(always_succeed_operator)
Answer 0 (score: 0):
Cloud Composer does not support multiple Celery queues, so remove 'queue': 'airflow' from your default arguments. That should fix your problem.
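A minimal sketch of what the fixed default_args could look like in the DAG file above, assuming nothing else changes; the 'queue' entry is simply dropped so tasks go to the default Celery queue that the Composer workers consume from:

from datetime import datetime, timedelta

# Same default_args as in the question, minus the 'queue' entry.
default_args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'email_on_failure': False,
    'email_on_retry': False,
    'start_date': datetime.today() - timedelta(days=2),
    'schedule_interval': None,
    'retries': 2,
    'retry_delay': timedelta(seconds=15),
    'priority_weight': 10,
}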