SparkSubmitOperator throws "dag_id could not be found" error at runtime

Date: 2018-03-27 08:06:10

Tags: apache-spark airflow

I have a Spark job that pulls a set of domains from AWS, followed by three separate jobs that each take those domains and extract various data from the sites. For some reason the workflow stops at ImportS3CrawlData with the following error:

[2018-03-22 13:37:02,762] {models.py:1428} INFO - Executing <Task(SparkSubmitOperator): ImportCrawlJob> on 2018-03-22 13:37:00
[2018-03-22 13:37:02,763] {base_task_runner.py:115} INFO - Running: ['bash', '-c', 'sudo -H -u hdfs airflow run dag_extract_jobs ImportCrawlJob 2018-03-22T13:37:00 --job_id 21 --raw -sd DAGS_FOLDER/run_extract_jobs.py --cfg_path /tmp/tmpir3e3r32']
[2018-03-22 13:37:04,194] {base_task_runner.py:98} INFO - Subtask: [2018-03-22 13:37:04,193] {__init__.py:45} INFO - Using executor SequentialExecutor
[2018-03-22 13:37:04,356] {base_task_runner.py:98} INFO - Subtask: [2018-03-22 13:37:04,356] {models.py:189} INFO - Filling up the DagBag from /home/airflow/airflow/dags/run_extract_jobs.py
[2018-03-22 13:37:04,451] {base_task_runner.py:98} INFO - Subtask: Traceback (most recent call last):
[2018-03-22 13:37:04,451] {base_task_runner.py:98} INFO - Subtask:   File "/usr/bin/airflow", line 27, in <module>
[2018-03-22 13:37:04,451] {base_task_runner.py:98} INFO - Subtask:     args.func(args)
[2018-03-22 13:37:04,452] {base_task_runner.py:98} INFO - Subtask:   File "/usr/lib/python3.5/site-packages/airflow/bin/cli.py", line 353, in run
[2018-03-22 13:37:04,452] {base_task_runner.py:98} INFO - Subtask:     dag = get_dag(args)
[2018-03-22 13:37:04,452] {base_task_runner.py:98} INFO - Subtask:   File "/usr/lib/python3.5/site-packages/airflow/bin/cli.py", line 130, in get_dag
[2018-03-22 13:37:04,452] {base_task_runner.py:98} INFO - Subtask:     'parse.'.format(args.dag_id))
[2018-03-22 13:37:04,452] {base_task_runner.py:98} INFO - Subtask: airflow.exceptions.AirflowException: dag_id could not be found: dag_extract_jobs. Either the dag did not exist or it failed to parse.
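(One way to sanity-check what the task runner actually sees, sketched here assuming Airflow 1.x and reusing the same sudo invocation that appears in the log above, is to ask that user's environment to list the DAGs it can parse:

# Run as the same user the task runner uses; if dag_extract_jobs is missing
# from the output, that user is parsing a different dags_folder (or a
# different airflow.cfg) than the one run_extract_jobs.py lives in.
sudo -H -u hdfs airflow list_dags
)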

The code for run_extract_jobs.py is below, with sensitive/unnecessary bits removed.

# Imports (omitted from the snippet above); these are the usual Airflow 1.x locations:
from datetime import datetime, timedelta

from airflow import DAG
from airflow.models import Variable
from airflow.contrib.operators.spark_submit_operator import SparkSubmitOperator

# Parameters to initialize Spark:
access_id = Variable.get("AWS_ACCESS_KEY")
bucket_name = 'cb-scrapinghub'
secret_key = Variable.get("AWS_SECRET_KEY")
timestamp = datetime.now().strftime("%Y-%m-%d-%H:%M:%S")


default_args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'retries': 1,
    'retry_delay': timedelta(minutes=5),
}

DAG = DAG(
    dag_id='dag_extract_jobs',
    description='Run Extract Jobs',
    schedule_interval='@once',
    start_date=datetime(2018, 1, 1),
    catchup=False,
    default_args=default_args,
)

# Spark Job that runs ImportS3CrawlData:
importCrawlJob = SparkSubmitOperator(
    task_id='ImportCrawlJob',
    ...
    run_as_user='hdfs',
    dag=DAG,
)

# Spark Job that runs ExtractAboutText:
extractAboutText = SparkSubmitOperator(
    task_id='ExtractAboutText',
    ...
    run_as_user='hdfs',
    dag=DAG
)
extractAboutText.set_upstream(importCrawlJob)

# Spark Job that runs ExtractCompanyInfo:
extractCompanyInfo = SparkSubmitOperator(
    task_id='ExtractCompanyInfo',
    ...
    run_as_user='hdfs',
    dag=DAG
)
extractCompanyInfo.set_upstream(importCrawlJob)

# Spark Job that runs ExtractWebPeople:
extractWebPeople = SparkSubmitOperator(
    task_id='ExtractWebPeople',
    ...
    run_as_user='hdfs',
    dag=DAG
)
extractWebPeople.set_upstream(importCrawlJob)

I've made sure both Airflow and Spark are up to date. My dags folder is set correctly, and Airflow runs the tutorial files just fine.

I've been banging my head against this for days and I'm thoroughly stumped. Thanks in advance for any help.

1 Answer:

Answer 0 (score: 0)

It looks like the configuration parameters haven't been set correctly.

Make sure you have completed the first section of https://airflow.apache.org/configuration.html
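Roughly speaking, that initial setup amounts to the following (a sketch, assuming a stock Airflow 1.x install; the AIRFLOW_HOME path is inferred from the log above and may differ on your machine):

# Point Airflow at the home directory that holds airflow.cfg, then
# initialize the metadata database.
export AIRFLOW_HOME=/home/airflow/airflow
airflow initdb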

Also, in airflow.cfg, make sure dags_folder is set to the path of your DAGs folder. While you're at it, check whether any other settings or paths need to be configured as well.
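For example, the relevant entry would look something like this (a sketch, using the DAGs path that appears in the log above):

[core]
# Must point at the directory that actually contains run_extract_jobs.py,
# and must resolve to the same location for every user that runs tasks.
dags_folder = /home/airflow/airflow/dags

Since the task is launched as the hdfs user via sudo, it is also worth confirming that this user picks up the same airflow.cfg (and therefore the same dags_folder) as the scheduler.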