Accessing Azure Data Lake Gen2 from Databricks

Date: 2019-07-16 08:52:45

Tags: azure azure-data-lake azure-databricks

I want to read data from Azure Data Lake Gen2 in Databricks. I have the Owner role with both read and write/execute permissions. The Data Lake allows access from the VNet to which Databricks is attached. I have tried two approaches: OAuth 2.0 and credential passthrough (the preferred one).

For the passthrough approach, I tried the following code in a Databricks notebook, with all prerequisites set up (from this git repo):

# MAGIC 1. Azure Databricks Premium Plan.
# MAGIC 2. Azure Data Lake Storage Gen2: Databricks Runtime 5.3 or above.
# MAGIC 3. High concurrency clusters, which support only Python and SQL. [Enabled AD Passthrough checkbox under Advanced Options](https://docs.azuredatabricks.net/spark/latest/data-sources/azure/adls-passthrough.html#enable-passthrough-for-a-cluster)
# MAGIC 4. User has been granted the [Storage Data Blob Owner/Contributor/Reader role](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-portal#rbac-roles-for-blobs-and-queues) OR [appropriate ACL permissions (R/W/E) on ADLS Gen2](https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-access-control#access-control-lists-on-files-and-directories)

# COMMAND ----------

# Try to access file
# You'll need data.csv at root of container/filesystem
df = spark.read.csv("abfss://<STORAGE_CONTAINER>@<STORAGE_ACCOUNT>.dfs.core.windows.net/data.csv")
display(df)

The error appears: "abfss://<STORAGE_CONTAINER>@<STORAGE_ACCOUNT>.dfs.core.windows.net/data.csv" has invalid authority

For the OAuth approach, I use the following code:

# Databricks notebook source
# MAGIC %md
# MAGIC ## Azure DataLake Gen2
# MAGIC 
# MAGIC Pre-requisites:
# MAGIC 1. [Create Service Principal](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal)
# MAGIC 1. Service Principal has been granted the [Storage Data Blob Owner/Contributor/Reader role](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-portal#rbac-roles-for-blobs-and-queues) OR [appropriate ACL permissions (R/W/E) on ADLS Gen2](https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-access-control#access-control-lists-on-files-and-directories)
# MAGIC 2. **Databricks Runtime 5.2** or above
# MAGIC 3. ADLS Gen2 storage account in the **same region** as your Azure Databricks workspace

# COMMAND ----------

# Set spark configuration
spark.conf.set("fs.azure.account.auth.type", "OAuth")
spark.conf.set("fs.azure.account.oauth.provider.type",     "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
spark.conf.set("fs.azure.account.oauth2.client.id", "    <SERVICE_PRINCIPLE_CLIENT_ID>")
spark.conf.set("fs.azure.account.oauth2.client.secret", "    <SERVICE_PRINCIPLE_SECRET>")
spark.conf.set("fs.azure.account.oauth2.client.endpoint",     "https://login.microsoftonline.com/<DIRECTORY_TENANT_ID>/oauth2/token")

# COMMAND ----------

# Try to access file
# You'll need data.csv at root of container/filesystem
df = spark.read.csv("abfss://<STORAGE_CONTAINER>@<STORAGE_ACCOUNT>.dfs.core.windows.net/data.csv")
display(df)

The error is: STORAGE_ACCOUNT.dfs.core.windows.net/STORAGE_CONTAINER//?action=getAccessControl&timeout=90

What could the problem be? I have read/write/execute permissions.

Am I identifying STORAGE_ACCOUNT and STORAGE_CONTAINER correctly?

I have the following file systems (example): [screenshot of the storage account's file systems]

For example, I want to read data.csv from the shared-read-and-write file system. Is the following code correct for that:

 df = spark.read.csv("abfss://<STORAGE_CONTAINER>@<STORAGE_ACCOUNT>.dfs.core.windows.net/data.csv")

->

 df = spark.read.csv("abfss://shared-read-and-write@DATALAKE_GEN2_NAME.dfs.core.windows.net/shared-read-and-write/data.csv")
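
For reference, a minimal sketch of the abfss URI layout under the OAuth setup above, assuming data.csv sits at the root of the shared-read-and-write file system and DATALAKE_GEN2_NAME stands in for the storage account name: in the abfss scheme the file system name appears once, in the authority part, and the path after the host is relative to the root of that file system.

# abfss://<file system>@<storage account>.dfs.core.windows.net/<path relative to file system root>
df = spark.read.csv("abfss://shared-read-and-write@DATALAKE_GEN2_NAME.dfs.core.windows.net/data.csv")
display(df)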

1 Answer:

Answer 0: (score: 1)

First create a mount point in Databricks, as in the code below:

clientid="<clientId>"
credential="<clientSecret>"
storageAccountKey="<storageAccountKey>"
refreshurl="<url>"

configs = {"dfs.adls.oauth2.access.token.provider.type": "ClientCredential",
           "dfs.adls.oauth2.client.id": clientid,
           "dfs.adls.oauth2.credential": credential,
           "dfs.adls.oauth2.refresh.url": refreshurl}

dbutils.fs.mount(
  source = "adl://<accountname>.azuredatalakestore.net/<path of adls>",
  mount_point = "/mnt/adls",  extra_configs = configs)

By default, paths resolve to DBFS, so ADLS will not work directly even if it is in the same resource group. Read through the mount instead, e.g. spark.read.csv("/mnt/adls/"). Hope this helps.
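
For completeness, a minimal sketch of reading the file through that mount, assuming the mount at /mnt/adls succeeded and data.csv sits at the mounted root (the header option is illustrative):

# Read via the DBFS mount point instead of a direct adl:// or abfss:// URI
df = spark.read.csv("/mnt/adls/data.csv", header=True)
display(df)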