Writing a dataframe to Blob storage using Azure Databricks

Time: 2020-03-12 17:41:59

Tags: azure azure-storage azure-storage-blobs azure-databricks

Is there any link or sample code where we can use Python (without the pyspark module) to write a dataframe to Azure Blob storage?

1 answer:

Answer 0: (score: 1)

Below is a snippet for writing CSV data directly to an Azure Blob storage container from within an Azure Databricks notebook.

# Configure the blob storage account access key globally
# (storage_name and sas_key are assumed to be defined earlier in the notebook)
spark.conf.set(
  "fs.azure.account.key.%s.blob.core.windows.net" % storage_name,
  sas_key)

# Build the wasbs:// URL of the output container and a working sub-folder
output_container_path = "wasbs://%s@%s.blob.core.windows.net" % (output_container_name, storage_name)
output_blob_folder = "%s/wrangled_data_folder" % output_container_path

# Write the dataframe as a single file to blob storage
# (coalesce(1) forces a single output partition, so Spark emits one part- file)
(dataframe
 .coalesce(1)
 .write
 .mode("overwrite")
 .option("header", "true")
 .format("com.databricks.spark.csv")
 .save(output_blob_folder))

# Get the name of the wrangled-data CSV file that was just saved to Azure blob storage (it starts with 'part-')
files = dbutils.fs.ls(output_blob_folder)
output_file = [x for x in files if x.name.startswith("part-")]

# Move the wrangled-data CSV file from the sub-folder (wrangled_data_folder)
# to the root of the blob container, renaming it in the same operation
dbutils.fs.mv(output_file[0].path, "%s/predict-transform-output.csv" % output_container_path)
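
The question asked for plain Python without the pyspark module, while the snippet above relies on Spark. Below is a minimal sketch of a pyspark-free alternative, assuming a pandas DataFrame and the azure-storage-blob package (v12); the connection string, container name, and blob name are placeholder values, not part of the original answer.

# A pyspark-free sketch: upload a pandas DataFrame as CSV with azure-storage-blob.
# connection_string, container_name, and the blob name are placeholder assumptions.
import pandas as pd
from azure.storage.blob import BlobServiceClient

connection_string = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net"
container_name = "output-container"

# Example dataframe; in practice this would be your wrangled data
df = pd.DataFrame({"id": [1, 2], "value": ["a", "b"]})

# Serialize to CSV in memory (no local file or part- files involved)
csv_data = df.to_csv(index=False)

# Upload the CSV text as a single blob, overwriting any existing one
service = BlobServiceClient.from_connection_string(connection_string)
blob = service.get_blob_client(container=container_name, blob="predict-transform-output.csv")
blob.upload_blob(csv_data, overwrite=True)

Because the CSV is built client-side, there is no part- file to rename afterwards; the trade-off is that the whole dataframe must fit in memory on a single node, so for large Spark dataframes the coalesce(1) approach above remains the practical choice.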

Example: notebook


Output: writing the dataframe to Blob storage using Azure Databricks
