My Airflow tasks ran without any problems, then suddenly got stuck halfway through; the task instance details are as shown above.
I cleared my entire metadata database, but I still get the same error.
The thing is, I only hit this issue with some of my tasks, usually the long-running jobs.
The error I get is:
[2019-07-03 12:14:56,337] {{models.py:1353}} INFO - Dependencies not met for <TaskInstance: XXXXXX.index_to_es 2019-07-01T13:30:00+00:00 [running]>, dependency 'Task Instance State' FAILED: Task is in the 'running' state which is not a valid state for execution. The task must be cleared in order to be run.
[2019-07-03 12:14:56,341] {{models.py:1353}} INFO - Dependencies not met for <TaskInstance: XXXXXX.index_to_es 2019-07-01T13:30:00+00:00 [running]>, dependency 'Task Instance Not Already Running' FAILED: Task is already running, it started on 2019-07-03 05:58:51.601552+00:00.
[2019-07-03 12:14:56,342] {{logging_mixin.py:95}} INFO - [2019-07-03 12:14:56,342] {{jobs.py:2514}} INFO - Task is not able to be run
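As far as I can tell, these lines mean the scheduler handed index_to_es to a worker while its task instance row was still marked 'running' in the metadata database, so both dependency checks fail and the new worker gives up. As the last message says, the instance has to be cleared before it can run again. Below is a minimal sketch of doing that programmatically, assuming Airflow 1.10.x and the DAG id shown in the log (the CLI equivalent is airflow clear -t index_to_es -s 2019-07-01 -e 2019-07-02 <dag_id>):

from datetime import datetime

from airflow.models import DagBag

# Load the DAG from the configured dags_folder and narrow it to the stuck task.
dag = DagBag().get_dag('XXXXXX')  # DAG id as it appears in the log above
stuck = dag.sub_dag(task_regex='index_to_es',
                    include_upstream=False,
                    include_downstream=False)
# Reset only instances still marked 'running' for the affected execution date.
stuck.clear(
    start_date=datetime(2019, 7, 1),
    end_date=datetime(2019, 7, 2),
    only_running=True,
)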
My DAG is as follows:
from datetime import datetime, timedelta

from airflow import DAG
from airflow.contrib.operators.bigquery_operator import BigQueryOperator
from airflow.contrib.operators.bigquery_to_gcs import BigQueryToCloudStorageOperator
from airflow.operators.python_operator import PythonOperator
# EsDownloadAndIndexOperator is a custom in-house operator; its import, and
# the DAG_NAME, TEMP_TABLE, CONNECTION_ID, CLOUD_STORAGE_URI, OBJECT, ES_URI,
# LOCAL_FILE, GCS_BUCKET_ID and INDEX constants, are defined elsewhere.

default_args = {
    'owner': 'datascience',
    'depends_on_past': True,
    'start_date': datetime(2019, 6, 12),
    'email': ['datascience@mycompany.com'],
    'email_on_failure': True,
    'email_on_retry': True,
    'retries': 3,
    'retry_delay': timedelta(minutes=5),
    # 'queue': 'nill',
    # 'pool': 'backfill',
    # 'priority_weight': 10,
    # 'end_date': datetime(2016, 1, 1),
}

def get_index_date(**kwargs):
    """Return the templated 'tomorrow' date with dots instead of dashes."""
    tomorrow = kwargs.get('templates_dict').get('tomorrow')
    return str(tomorrow).replace('-', '.')

"""Create the DAG and specify its features"""
dag = DAG(
    DAG_NAME,
    schedule_interval="0 9 * * *",
    catchup=True,
    default_args=default_args,
    template_searchpath='/efs/sql')

"""Task to build the temp table from a query"""
create_table = BigQueryOperator(
    dag=dag,
    task_id='create_temp_table_from_query',
    sql='daily_demand.sql',
    use_legacy_sql=False,
    destination_dataset_table=TEMP_TABLE,
    bigquery_conn_id=CONNECTION_ID,
    create_disposition='CREATE_IF_NEEDED',
    write_disposition='WRITE_TRUNCATE'
)

"""Task to zip and export to GCS"""
export_to_storage = BigQueryToCloudStorageOperator(
    task_id='export_to_GCS',
    source_project_dataset_table=TEMP_TABLE,
    destination_cloud_storage_uris=[CLOUD_STORAGE_URI],
    export_format='NEWLINE_DELIMITED_JSON',
    compression='GZIP',
    bigquery_conn_id=CONNECTION_ID,
    dag=dag)

"""Task to get tomorrow's execution date formatted for indexing"""
get_index_date = PythonOperator(
    task_id='get_index_date',
    python_callable=get_index_date,  # the function is captured before this name is rebound
    templates_dict={'tomorrow': "{{ tomorrow_ds }}"},
    provide_context=True,
    dag=dag
)

"""Task to download the zipped files and bulk-index them into Elasticsearch"""
es_indexing = EsDownloadAndIndexOperator(
    task_id="index_to_es",
    object=OBJECT,
    es_url=ES_URI,
    local_path=LOCAL_FILE,
    gcs_conn_id=CONNECTION_ID,
    bucket=GCS_BUCKET_ID,
    es_index_type='demand_shopper',
    es_bulk_batch=5000,
    es_index_name=INDEX,
    es_request_timeout=300,
    dag=dag)

"""Define the chronology of tasks in the DAG"""
create_table >> export_to_storage >> get_index_date >> es_indexing
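For reference, get_index_date only reshapes the templated date: for the run shown in the log (execution date 2019-07-01), {{ tomorrow_ds }} renders as '2019-07-02', so the task returns the dotted form used as the index date.

# Standalone illustration with an assumed rendered value of {{ tomorrow_ds }}:
tomorrow = '2019-07-02'
print(tomorrow.replace('-', '.'))  # -> 2019.07.02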
Thanks for your help.
Answer 0 (score: 0)
I found the problem: it was an infrastructure issue. I was using AWS EFS, and burst throughput mode was throttling the worker processes once the throughput credits were used up. After switching to provisioned throughput mode, the workers no longer got stuck. I got the idea from ecs-airflow-1-10-2-performance-issues-operators-and-tasks-take-10x-longer
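If you want to confirm this on your own filesystem, the giveaway is the EFS BurstCreditBalance CloudWatch metric draining toward zero while the workers hang. A minimal sketch with boto3, assuming a hypothetical filesystem id 'fs-12345678' and region 'us-east-1':

from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')
resp = cloudwatch.get_metric_statistics(
    Namespace='AWS/EFS',
    MetricName='BurstCreditBalance',
    Dimensions=[{'Name': 'FileSystemId', 'Value': 'fs-12345678'}],  # hypothetical id
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    Period=3600,            # one datapoint per hour
    Statistics=['Minimum'],
)
# Print the hourly minimum credit balance for the last day, oldest first.
for point in sorted(resp['Datapoints'], key=lambda p: p['Timestamp']):
    print(point['Timestamp'], point['Minimum'])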