How to create a Databricks job with parameters

Posted: 2018-07-11 09:00:26

Tags: python azure pyspark databricks azure-cli

I'm creating a new job in Databricks using the databricks-cli:

databricks jobs create --json-file ./deploy/databricks/config/job.config.json

with the following JSON:

{
    "name": "Job Name",
    "new_cluster": {
        "spark_version": "4.1.x-scala2.11",
        "node_type_id": "Standard_D3_v2",
        "num_workers": 3,
        "spark_env_vars": {
            "PYSPARK_PYTHON": "/databricks/python3/bin/python3"
        }
    },
    "libraries": [
        {
            "maven": {
                "coordinates": "com.microsoft.sqlserver:mssql-jdbc:6.5.3.jre8-preview"
            }
        }
    ],
    "timeout_seconds": 3600,
    "max_retries": 3,
    "schedule": {
        "quartz_cron_expression": "0 0 22 ? * *",
        "timezone_id": "Israel"
    },
    "notebook_task": {
        "notebook_path": "/notebooks/python_notebook"
    }
}

I'd like to add some parameters that can be accessed inside the notebook via:

dbutils.widgets.text("argument1", "<default value>")
value = dbutils.widgets.get("argument1")
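For context on how these two calls interact: `dbutils.widgets.text` registers the widget with a default, and `dbutils.widgets.get` returns the value passed in by the job run (falling back to the default when none is supplied). Outside a Databricks runtime that resolution logic can be sketched as a plain dictionary lookup; `base_parameters` and `get_widget` below are illustrative stand-ins, not Databricks APIs:

```python
# Parameters the job run would supply (illustrative values).
base_parameters = {"argument1": "value 1"}

# Hypothetical stand-in for dbutils.widgets: a run-time parameter
# overrides the widget default; a missing one falls back to it.
def get_widget(name, default):
    return base_parameters.get(name, default)

print(get_widget("argument1", "<default value>"))  # value 1
print(get_widget("argument2", "<default value>"))  # <default value>
```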

1 Answer:

Answer 0 (score: 1)

Found the answer after a bit of tweaking: you just need to extend the notebook_task property to include base_parameters, like so:

{
    "notebook_task": {
        "notebook_path": "/social/04_batch_trends",
        "base_parameters": {           
            "argument1": "value 1",
            "argument2": "value 2"
        }
    }
}
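If you already have a job.config.json like the one in the question, the same change can be applied programmatically before calling `databricks jobs create`. A minimal sketch using only the standard library (the dict below is an abbreviated version of the question's config, and the parameter names/values are illustrative):

```python
import json

# Abbreviated job config, as in the question.
job = {
    "name": "Job Name",
    "notebook_task": {
        "notebook_path": "/notebooks/python_notebook",
    },
}

# Extend notebook_task with base_parameters, per the answer.
job["notebook_task"]["base_parameters"] = {
    "argument1": "value 1",
    "argument2": "value 2",
}

# Serialize back to JSON for the CLI to consume.
print(json.dumps(job, indent=4))
```

Each key in `base_parameters` must match the widget name the notebook registers with `dbutils.widgets.text`, otherwise `dbutils.widgets.get` falls back to the widget's default.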