Azure Blob error: "The account being accessed does not support http" even though the wasbs scheme is used

Asked: 2019-09-18 02:55:35

Tags: azure azure-storage-blobs

I am accessing Azure Blob storage as follows:

spark.conf.set("fs.azure", "org.apache.hadoop.fs.azure.NativeAzureFileSystem")
spark.conf.set(s"fs.azure.account.key." + storageAccName + ".blob.core.windows.net", storageAccKey)
val container = "wasbs://" + storageAccContainerName + "@" + storageAccName + ".blob.core.windows.net/"

data.writeStream
      .option("checkpointLocation", "some/checkpoint/dir")
      .format("avro")
      .option("path",  container)
      .start()
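
For reference, here is a minimal, self-contained sketch of the same setup with placeholder values. The account name, key lookup, container name, and checkpoint path are hypothetical and only spell out how the wasbs:// URI and the fs.azure.account.key setting are composed; it is not presented as a fix for the error below.

// Self-contained sketch; all names and paths are placeholders.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("wasbs-sink-sketch").getOrCreate()

val storageAccName = "mystorageacct"             // hypothetical storage account
val storageAccKey = sys.env("AZURE_STORAGE_KEY") // hypothetical key source
val storageAccContainerName = "mycontainer"      // hypothetical container

spark.conf.set("fs.azure", "org.apache.hadoop.fs.azure.NativeAzureFileSystem")
spark.conf.set(s"fs.azure.account.key.$storageAccName.blob.core.windows.net", storageAccKey)

// Resolves to wasbs://mycontainer@mystorageacct.blob.core.windows.net/
val container = s"wasbs://$storageAccContainerName@$storageAccName.blob.core.windows.net/"

// Placeholder streaming source; in the real job `data` is the actual streaming DataFrame.
val data = spark.readStream.format("rate").load()

data.writeStream
  .option("checkpointLocation", container + "checkpoints/avro-sink") // hypothetical path
  .format("avro")
  .option("path", container)
  .start()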

I get the following error:

shaded.databricks.org.apache.hadoop.fs.azure.AzureException: com.microsoft.azure.storage.StorageException: The account being accessed does not support http.
at shaded.databricks.org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.retrieveMetadata(AzureNativeFileSystemStore.java:2084)
at shaded.databricks.org.apache.hadoop.fs.azure.NativeAzureFileSystem.getFileStatus(NativeAzureFileSystem.java:2124)
at org.apache.spark.sql.execution.streaming.FileSystemBasedCheckpointFileManager.exists(CheckpointFileManager.scala:255)
at com.databricks.spark.sql.streaming.DatabricksCheckpointFileManager.exists(DatabricksCheckpointFileManager.scala:80)
at org.apache.spark.sql.execution.streaming.HDFSMetadataLog.<init>(HDFSMetadataLog.scala:84)
at org.apache.spark.sql.execution.streaming.CompactibleFileStreamLog.<init>(CompactibleFileStreamLog.scala:48)
at org.apache.spark.sql.execution.streaming.CompactibleFileStreamLog.<init>(CompactibleFileStreamLog.scala:51)
at org.apache.spark.sql.execution.streaming.FileStreamSinkLog.<init>(FileStreamSinkLog.scala:85)
at org.apache.spark.sql.execution.streaming.FileStreamSink.<init>(FileStreamSink.scala:98)
at org.apache.spark.sql.execution.datasources.DataSource.createSink(DataSource.scala:315)
at org.apache.spark.sql.streaming.DataStreamWriter.start(DataStreamWriter.scala:330)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: com.microsoft.azure.storage.StorageException: The account being accessed does not support http.
at com.microsoft.azure.storage.StorageException.translateFromHttpStatus(StorageException.java:175)
at com.microsoft.azure.storage.StorageException.translateException(StorageException.java:94)
at com.microsoft.azure.storage.core.StorageRequest.materializeException(StorageRequest.java:305)
at com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:178)
at com.microsoft.azure.storage.blob.CloudBlobContainer.downloadAttributes(CloudBlobContainer.java:570)
at shaded.databricks.org.apache.hadoop.fs.azure.StorageInterfaceImpl$CloudBlobContainerWrapperImpl.downloadAttributes(StorageInterfaceImpl.java:240)
at shaded.databricks.org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.checkContainer(AzureNativeFileSystemStore.java:1203)
at shaded.databricks.org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.retrieveMetadata(AzureNativeFileSystemStore.java:2006)

Is there any way to write to Azure Blob storage via Spark writeStream() while keeping secure transfer enabled on the storage account?

0 answers