XCom from subdag to DAG in Airflow Composer

Asked: 2019-09-10 02:37:24

Tags: python-3.x airflow google-cloud-composer apache-airflow-xcom

Before I start, I apologize, because this type of question has been asked before, but I am still having trouble understanding the situation below.

I have only been working professionally with Airflow and Python for about a month, so please excuse the poorly written Python function; it basically takes a filename string and returns a string value that can be used for incremental (delta) processing.

Step: I want to pick up all files with the ABC prefix from a bucket and iterate through them.

Approach: the code is below.

#!/usr/bin/env python

"""

"""
import re
from datetime import datetime, timedelta

from airflow import DAG
from airflow.contrib.operators.gcs_to_bq import GoogleCloudStorageToBigQueryOperator
from airflow.contrib.operators.bigquery_operator import BigQueryOperator, BigQueryCreateEmptyTableOperator
from airflow.operators.bash_operator import BashOperator
from airflow.contrib.operators.gcs_list_operator import GoogleCloudStorageListOperator
from airflow.utils.trigger_rule import TriggerRule
from airflow.operators import PythonOperator
#from airflow.contrib.hooks import gcs_hook
from airflow.contrib.sensors.gcs_sensor import GoogleCloudStoragePrefixSensor
from airflow.contrib.operators.bigquery_table_delete_operator import BigQueryTableDeleteOperator
from airflow.contrib.operators.bigquery_to_gcs import BigQueryToCloudStorageOperator
#import GenDeltaDate
#from airflow.operators import InvalidDataFilterOperator

YESTERDAY = datetime.combine(
    datetime.today() - timedelta(days=1), datetime.min.time())
BQ_DATASET_NAME = 'Master'
CURRENT_TIME = datetime


default_args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'start_date': YESTERDAY,
    #'email': [],
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 0,
    'provide_context': True,
    'dataflow_default_options': {
        'project': 'project',
        'zone': 'us-east1-f'
    }
}
files_to_process = ['abc']
bucket = 'bucket_name'


def pull(**context):
    archive(context['ti'].xcom_pull(task_ids='list_files'))


def gen_delta_date(input_file, **kwargs):
    # step 1: check for a file extension and remove it
    idx_extension           = input_file.find(".")
    input_file_name         = input_file[:idx_extension]
    # check for 3 groups of digits separated by underscores and grab that value
    find_date_time_part     = re.findall(r"_(\d*?_\d*?_\d*)", input_file_name)
    # massage the value by removing unneeded characters
    find_date_time_part     = str(find_date_time_part).split('_', 1)[-1].strip(']')
    find_date_time_part     = str(find_date_time_part)
    find_date_time_part     = re.sub("'", '', find_date_time_part)
    find_date_time_part_len = len(find_date_time_part)
    '''
    to-do:
    1. need to remove hard coded length value and pass as a parameter.

    '''
    if find_date_time_part_len == 15:
        #Splitting the transformed input file name based on _ and save it into a list
        x = [a for a in find_date_time_part.split('_') if a]
        #get the date time part from the list i.e split at underscore
        x = (' '.join(x[-2:]))
        #print(x)
        #Using strptime to parse the string value as datetime object here our date format is YYYYMMDD hhmiss
        dt_obj = datetime.strptime(x, "%Y%m%d %H%M%S")
        # use strftime to format the date object into desired format in our case YYYY-MM-DD hh:mi:ss
        final_date_formatted = dt_obj.strftime("%Y-%m-%d %H:%M:%S")
        #print(type(find_date_time_part))
        return final_date_formatted
    else:
        print("Error: Input filename does not match the naming conventions:The input file naming format shoud be *xx_YYYYMMDD_hhmiss for proper parsing xx is numeric value here {0}_{1}".format(find_date_time_part_len,input_file))

with DAG('Test', default_args=default_args,
    schedule_interval=None,
) as dag:
    for item in files_to_process:
        # List the files in the bucket
        GCS_File_list = GoogleCloudStorageListOperator(
                    task_id='list_files',
                    bucket=bucket,
                    prefix='ABC',
                    delimiter='.csv',
                    google_cloud_storage_conn_id='google_cloud_default',
                    #provide_context = True,
                    dag=dag
                )

        for  idx, file in enumerate(["{{ ti.xcom_pull(task_ids='list_files') }}"]):
            #print(idx)
            #print(file)
            Python_Task = PythonOperator(
                             task_id=item+'_pass_date',
                             provide_context=True,
                             python_callable=gen_delta_date,
                             op_kwargs={'input_file':file},
                             trigger_rule=TriggerRule.ALL_SUCCESS,
                             #provide_context = True,
                             #xcom_push=True,
                             dag=dag
                            )
            sql_task = BigQueryOperator(
                       task_id='query',
                       sql='test.sql',
                       destination_dataset_table='{0}.list_test'.format(BQ_DATASET_NAME),
                       bigquery_conn_id='bigquery_default',
                       use_legacy_sql=False,
                       trigger_rule=TriggerRule.ALL_SUCCESS,
                       provide_context=True,
                       create_disposition = 'CREATE_IF_NEEDED',
                       write_disposition = 'WRITE_APPEND'
                      )
#Orchestration.
GCS_File_list >> Python_Task >> sql_task
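
For reference, this is what gen_delta_date returns for a well-formed name (the sample filename below is made up purely to illustrate the expected output):

# hypothetical filename, used only to show the expected result
print(gen_delta_date('ABC_01_20190910_023724.csv'))
# -> 2019-09-10 02:37:24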

But after checking, I see that the filename passed to the Python function is not being templated; it is passed through as the literal xcom_pull string.
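
To illustrate what I believe is happening (a minimal sketch of my own, not verified against the scheduler internals): the for loop runs when the DAG file is parsed, before any task has executed, so the loop variable only ever holds the literal Jinja string; template rendering happens later, at task run time, and only for an operator's templated fields.

# parse-time illustration: the list holds exactly one element, the literal
# Jinja string, so only one task is created and `file` is never the XCom value
for idx, file in enumerate(["{{ ti.xcom_pull(task_ids='list_files') }}"]):
    print(idx, repr(file))
# prints: 0 "{{ ti.xcom_pull(task_ids='list_files') }}"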

After doing some research, I found the exact same code along with an explanation of why it does not work. Link: [Airflow unable to iterate through xcom_pull list with GoogleCloud Operatos]

The post above mentions using subdags to implement this. But if I make the GCS_File_list task a subdag, how do I return its value to the main DAG as a list, so that I can then iterate over the list of files to run Python_Task and sql_task?

From what I understand, I have to use "{{ ti.xcom_pull(task_ids='list_files') }}" inside the operator rather than what I did in the code above (for idx, file in enumerate(["{{ ti.xcom_pull(task_ids='list_files') }}"])), but then how do I get that value stored as a list?
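
One workaround that occurs to me (a rough sketch, assuming it is acceptable to handle every file in a single downstream task rather than one task per file) is to move the loop inside a Python callable, where the XCom value is actually available at run time:

def process_listed_files(**context):
    # at run time the XCom pushed by list_files is a real Python list
    files = context['ti'].xcom_pull(task_ids='list_files') or []
    for input_file in files:
        delta_date = gen_delta_date(input_file)
        print(input_file, delta_date)

process_files = PythonOperator(
    task_id='process_listed_files',
    provide_context=True,
    python_callable=process_listed_files,
    dag=dag,
)

GCS_File_list >> process_files

This does not give one Airflow task per file, but it avoids trying to render an XCom while the DAG file is being parsed.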

Any pointers or suggestions would be greatly appreciated.

Thanks.

1 Answer:

Answer 0 (score: 0)

Hi, I ended up using a completely different approach to solve this; see the loop over airflow variables issue question.
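
Roughly, the idea is to keep the list of files in an Airflow Variable and loop over that while building the DAG (a sketch with made-up names, not the exact code from the linked question):

from airflow.models import Variable

# "abc_file_list" is a made-up Variable name for this sketch; it would be
# populated elsewhere (e.g. by another task or an external process) as a JSON list
file_list = Variable.get("abc_file_list", default_var=[], deserialize_json=True)

for idx, input_file in enumerate(file_list):
    PythonOperator(
        task_id='pass_date_{}'.format(idx),
        python_callable=gen_delta_date,
        op_kwargs={'input_file': input_file},
        dag=dag,  # the DAG object defined above
    )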

Regards.