ModuleNotFoundError: No module named 'airflow'

Date: 2020-08-11 03:47:26

Tags: python google-cloud-platform airflow google-cloud-dataflow google-cloud-composer

I am using the Airflow PythonOperator to execute a Python Beam job via the Dataflow runner. The Dataflow job returns the error "ModuleNotFoundError: No module named 'airflow'".

In the Dataflow UI, the SDK version used when the job is invoked with the PythonOperator is 2.15.0. When the job is executed from Cloud Shell, the SDK version used is 2.23.0. The job runs successfully when it is launched from the shell.

The Composer environment details are:

Image version = composer-1.10.3-airflow-1.10.3

Python version= 3

A previous post suggested using the PythonVirtualenvOperator. I tried this with the settings:

requirements=['apache-beam[gcp]'],

python_version=3

Composer returns the error "'install', 'apache-beam[gcp]']' returned non-zero exit status 2."

Any suggestions would be greatly appreciated.

This is the DAG that calls the Dataflow job. I have not shown all the functions used in the DAG, but have kept the imports in:

import logging
import pprint
import json
from airflow.operators.bash_operator import BashOperator
from airflow.operators.python_operator import PythonOperator
from airflow.contrib.operators.dataflow_operator import DataflowTemplateOperator
from airflow.models import DAG
import google.cloud.logging
from datetime import timedelta
from airflow.utils.dates import days_ago
from deps import utils
from google.cloud import storage
from airflow.exceptions import AirflowException
from deps import logger_montr
from deps import dataflow_clean_csv


dag = DAG(dag_id='clean_data_file',
          default_args=args,
          description='Runs Dataflow to clean csv files',
          schedule_interval=None)

def get_values_from_previous_dag(**context):
    var_dict = {}
    for key, val in context['dag_run'].conf.items():
        context['ti'].xcom_push(key, val)
        var_dict[key] = val

populate_ti_xcom = PythonOperator(
    task_id='get_values_from_previous_dag',
    python_callable=get_values_from_previous_dag,
    provide_context=True,
    dag=dag,
)


dataflow_clean_csv = PythonOperator(
    task_id="dataflow_clean_csv",
    python_callable=dataflow_clean_csv.clean_csv_dataflow,
    op_kwargs={
        'project':
        'zone':
        'region':
        'stagingLocation':
        'inputDirectory':
        'filename':
        'outputDirectory':
    },
    provide_context=True,
    dag=dag,
)

populate_ti_xcom >> dataflow_clean_csv

I used the ti.xcom_pull(task_ids='get_values_from_previous_dag') method to assign the op_kwargs.
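As a minimal sketch of that pattern (the `FakeTI` stub, key names, and sample values below are illustrative assumptions; in the real DAG, `ti` comes from the task context):

```python
def build_dataflow_kwargs(ti):
    # Pull each value pushed by the upstream task and assemble op_kwargs.
    keys = ['project', 'zone', 'region', 'stagingLocation',
            'inputDirectory', 'filename', 'outputDirectory']
    return {k: ti.xcom_pull(task_ids='get_values_from_previous_dag', key=k)
            for k in keys}


# Stand-in for Airflow's TaskInstance, used only to illustrate the call shape.
class FakeTI:
    def __init__(self, pushed):
        self.pushed = pushed

    def xcom_pull(self, task_ids=None, key=None):
        return self.pushed.get(key)


ti = FakeTI({'project': 'my-project', 'filename': 'data.csv'})
kwargs = build_dataflow_kwargs(ti)
```

Keys that were never pushed come back as `None`, which is worth checking before handing the dict to the Dataflow callable.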

This is the Dataflow job being called:

import apache_beam as beam
import csv
import logging
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.io import WriteToText


def parse_file(element):
  for line in csv.reader([element], quotechar='"', delimiter=',', quoting=csv.QUOTE_ALL):
      line = [s.replace('\"', '') for s in line]
      clean_line = '","'.join(line)
      final_line = '"'+ clean_line +'"'
      return final_line

def clean_csv_dataflow(**kwargs): 
    argv = [
           # Dataflow pipeline options 
           "--region={}".format(kwargs["region"]),
           "--project={}".format(kwargs["project"]) ,
           "--temp_location={}".format(kwargs["stagingLocation"]),
           # Setting Dataflow pipeline options  
           '--save_main_session',
           '--max_num_workers=8',
           '--autoscaling_algorithm=THROUGHPUT_BASED', 
           # Mandatory constants
           '--job_name=cleancsvdataflow',
           '--runner=DataflowRunner'     
          ]
    options = PipelineOptions(
      flags=argv
      )
      
    pipeline = beam.Pipeline(options=options)
    
    inputDirectory = kwargs["inputDirectory"]
    filename = kwargs["filename"]
    outputDirectory = kwargs["outputDirectory"]

    
    outputfile_temp = filename
    outputfile_temp = outputfile_temp.split(".")
    outputfile = "_CLEANED.".join(outputfile_temp)   

    in_path_and_filename = "{}{}".format(inputDirectory,filename)
    out_path_and_filename = "{}{}".format(outputDirectory,outputfile)
    
   

    clean_csv = (pipeline 
      | "Read input file" >> beam.io.ReadFromText(in_path_and_filename)
      | "Parse file" >> beam.Map(parse_file)
      | "writecsv" >> beam.io.WriteToText(out_path_and_filename,num_shards=1)
    )
   
    pipeline.run()
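For reference, the cleaning step above can be exercised in isolation from the pipeline; this minimal check (the sample CSV line is an assumption) shows what `parse_file` emits for one record:

```python
import csv

def parse_file(element):
    # Same logic as the pipeline's parse_file: strip embedded double quotes
    # and re-emit every field wrapped in double quotes.
    for line in csv.reader([element], quotechar='"', delimiter=',',
                           quoting=csv.QUOTE_ALL):
        line = [s.replace('"', '') for s in line]
        return '"' + '","'.join(line) + '"'

# A quoted field containing a comma survives as a single field.
cleaned = parse_file('a,"b, c",d')
```

This makes it easy to confirm the transform is correct before debugging environment problems on the Dataflow workers.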

1 Answer:

Answer 0 (score: 1)

This answer was provided by @BSpinoza in the comment section:

What I did was move all the imports from the global namespace and place them into the function definitions. Then, from the calling DAG I used the BashOperator. It worked.

Also, one of the recommended ways is to use the DataFlowPythonOperator.
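The mechanism behind that fix can be illustrated without Airflow: a module imported inside a function body is resolved only when the function is called, so a Dataflow worker that lacks `airflow` can still load a pipeline file whose airflow imports live inside callables. The module name below is a deliberate stand-in for any package missing on the worker:

```python
def run_on_worker():
    # Resolved at call time, not at module load time. On a Dataflow worker
    # without this package, merely importing the containing file is fine.
    import some_package_missing_on_the_worker  # hypothetical module name
    return some_package_missing_on_the_worker.do_work()

# Defining the function succeeds even though the module does not exist here;
# only calling it raises ModuleNotFoundError.
try:
    run_on_worker()
    failed = False
except ModuleNotFoundError:
    failed = True
```

This is why top-level `from airflow...` imports in the same file as the Beam pipeline cause the worker-side ModuleNotFoundError, while function-local imports do not.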