Airflow - script changes the filename variable

Date: 2019-06-24 14:51:42

Tags: python airflow

I created a flow in Airflow where, every 10 minutes, I need to export a new file from a SQL Server database and load it into BigQuery. The generated file is a CSV whose filename automatically includes the processing date in YYYYMMDDHHMMSS format.

When the DAG moves from step 1 (export) to step 2 (insert into BigQuery), Airflow re-parses the script, so the module-level filename variable is evaluated again and the processing date no longer matches step 1!

Example:

step 1: test_20190624113656.csv
step 2: test_20190624113705.csv

In this case, I want to keep the filename generated in step 1.

from datetime import datetime

from google.cloud import bigquery

# Evaluated at every parse of the DAG file, so each parse produces a new timestamp.
nm_arquivo = 'test_' + datetime.today().strftime('%Y%m%d%H%M%S') + '.csv'

# step 2 callable: load the exported CSV from GCS into BigQuery
def insert_bigquery(ds, **kwargs):
    bigquery_client = bigquery.Client(project="project_name")
    dataset_ref = bigquery_client.dataset('test_dataset')
    job_config = bigquery.LoadJobConfig()
    job_config.schema = [
        bigquery.SchemaField('id','INTEGER',mode='REQUIRED'),
        bigquery.SchemaField('sigla','STRING',mode='REQUIRED'),
        bigquery.SchemaField('nome_en','STRING',mode='REQUIRED'),
        bigquery.SchemaField('nome_pt','STRING',mode='REQUIRED'),
    ]
    job_config.source_format = bigquery.SourceFormat.CSV
    # Partition the destination table by ingestion time and cluster on id/sigla.
    time_partitioning = bigquery.table.TimePartitioning()
    job_config.time_partitioning = time_partitioning
    job_config.clustering_fields = ["id", "sigla"]
    # nm_arquivo comes from module level, so it may have been re-evaluated since step 1.
    uri = "gs://bucket_name/" + nm_arquivo
    load_job = bigquery_client.load_table_from_uri(
        uri,
        dataset_ref.table('bdb'),
        job_config=job_config
        )
    print('Starting job {}'.format(load_job.job_id))
    load_job.result()
    print('Job finished.')

# step 1: export from SQL Server to a CSV in GCS
import_orders_op = MsSqlToGoogleCloudStorageOperator(
    task_id='import_orders',
    mssql_conn_id='mssql_conn',
    google_cloud_storage_conn_id='gcp_conn',
    sql="""select * from bdb""",
    bucket='bucket_name',
    filename=nm_arquivo,
    dag=dag) 

# step 2: load the CSV into BigQuery
run_this = PythonOperator(
    task_id='insert_bigquery',
    provide_context=True,
    python_callable=insert_bigquery,
    dag=dag,
)

run_this.set_upstream(import_orders_op)

2 answers:

Answer 0 (score: 2)

You should use the DAG's execution time.

You can use the {{ ts_nodash }} Airflow macro. It takes execution_date.isoformat() (e.g. 2018-01-01T00:00:00+00:00) and strips the - and : characters, giving e.g. 20180101T000000. The macro can be used in any templated parameter.
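As a minimal sketch of how this could look here (assuming, as in Airflow 1.10, that filename is a templated field on MsSqlToGoogleCloudStorageOperator and that templates_dict on PythonOperator is rendered with the same context), both tasks can derive the name from the execution date, so it is identical in step 1 and step 2:

# Rendered once per DAG run from the execution date, e.g. test_20190624T113000.csv,
# so both tasks see the same name instead of a fresh wall-clock timestamp.
nm_arquivo = 'test_{{ ts_nodash }}.csv'

def insert_bigquery(ds, **kwargs):
    # With provide_context=True, the rendered templates_dict is passed in kwargs.
    nm_arquivo = kwargs['templates_dict']['nm_arquivo']
    uri = "gs://bucket_name/" + nm_arquivo
    # ... rest of the load job as before ...

import_orders_op = MsSqlToGoogleCloudStorageOperator(
    task_id='import_orders',
    mssql_conn_id='mssql_conn',
    google_cloud_storage_conn_id='gcp_conn',
    sql="""select * from bdb""",
    bucket='bucket_name',
    filename=nm_arquivo,  # templated field: the macro is rendered at run time
    dag=dag)

run_this = PythonOperator(
    task_id='insert_bigquery',
    provide_context=True,
    python_callable=insert_bigquery,
    templates_dict={'nm_arquivo': nm_arquivo},  # rendered with the same execution date
    dag=dag)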

For more information and a list of all other available variables, see the macros reference in the Airflow documentation.

Answer 1 (score: 0)

You can store the filename in a file:

import pickle
from datetime import datetime

nm_arquivo = 'test_' + datetime.today().strftime('%Y%m%d%H%M%S') + '.csv'

# step 1: persist the generated filename
with open('filename.pickle', 'wb') as handle:
    pickle.dump(nm_arquivo, handle)

# step 2: read back exactly the name that step 1 wrote
with open('filename.pickle', 'rb') as handle:
    nm_arquivo = pickle.load(handle)
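A rough sketch of wiring this into the original callable, using the same 'filename.pickle' path as above. Note the assumption: this only works when both tasks run on the same worker with a shared filesystem; small values like a filename are usually passed between tasks via Airflow's XCom instead:

import pickle

def insert_bigquery(ds, **kwargs):
    # Load the name that the export step pickled, instead of recomputing it.
    with open('filename.pickle', 'rb') as handle:
        nm_arquivo = pickle.load(handle)
    uri = "gs://bucket_name/" + nm_arquivo
    # ... rest of the load job as before ...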