I set up my Databricks notebook to connect to Azure SQL Data Warehouse using [this link][1]. I am trying to run a SQL query from the notebook, but I get the following error:
at com.databricks.spark.sqldw.Utils$.wrapExceptions(Utils.scala:258)
at
at linea51bced512544f18bbe612d4c624fde241.$read.<init>(command-3691084376529925:86)
at linea51bced512544f18bbe612d4c624fde241.$read$.<init>(command-3691084376529925:90)
at linea51bced512544f18bbe612d4c624fde241.$read$.<clinit>(command-3691084376529925)
at linea51bced512544f18bbe612d4c624fde241.$eval$.$print$lzycompute(<notebook>:7)
at linea51bced512544f18bbe612d4c624fde241.$eval$.$print(<notebook>:6)
at linea51bced512544f18bbe612d4c624fde241.$eval.$print(<notebook>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
The error occurs after I run the following code:
%scala
Class.forName("com.databricks.spark.sqldw.DefaultSource")
import org.apache.spark.sql.functions._
import org.apache.spark.sql.{DataFrame, SQLContext}
spark.conf.set(
"fs.azure.account.key.<your-storage-account-name>.blob.core.windows.net",
"<your-storage-account-access-key>")
// Load data from a SQL DW query
val df: DataFrame = spark.read
.format("com.databricks.spark.sqldw")
.option("url", "jdbc:sqlserver://xxxxx.database.windows.net:1433")
.option("tempDir", "wasbs://xxxxx@xxxxx.blob.core.windows.net/https:")
.option("forwardSparkAzureStorageCredentials", "true")
.option("query", "select * from xxxx.xxxx")
.load()
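For context, this is the full pattern I was trying to follow from the linked doc, with every value replaced by an obvious placeholder (the server, database, storage account, container, and credentials below are not my real values). The parts I am least sure about in my own snippet are the JDBC URL (mine has no database or credentials in it) and the tempDir value ending in `/https:`, so in this sketch the URL carries the full connection string and tempDir points at a plain directory inside a blob container:

```scala
// Placeholder values -- none of these are my real names or keys.
val storageAccount = "mystorageaccount"              // blob storage account used for staging
val container      = "tempfiles"                     // existing container in that account
val dwServer       = "myserver.database.windows.net" // SQL DW server
val dwDatabase     = "mydw"                          // SQL DW database name

// Give Spark the storage key so the connector can stage data in blob storage.
spark.conf.set(
  s"fs.azure.account.key.$storageAccount.blob.core.windows.net",
  "<storage-account-access-key>")

// JDBC URL carrying the database name and SQL credentials.
val jdbcUrl =
  s"jdbc:sqlserver://$dwServer:1433;database=$dwDatabase;" +
  "user=<sql-user>;password=<sql-password>;encrypt=true;loginTimeout=30"

// tempDir pointing at a directory inside the container.
val tempDir = s"wasbs://$container@$storageAccount.blob.core.windows.net/sqldw-temp"

val df = spark.read
  .format("com.databricks.spark.sqldw")
  .option("url", jdbcUrl)
  .option("tempDir", tempDir)
  .option("forwardSparkAzureStorageCredentials", "true")
  .option("query", "select top 10 * from dbo.mytable")
  .load()

df.show()
```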
[1]: https://docs.databricks.com/spark/latest/data-sources/azure/sql-data-warehouse.html#