I am trying to take Spark code I wrote in IntelliJ and run it on Databricks, and I found this can be done with the "sbt-databricks" plugin.
Here is my build.sbt file:
name := "DatabricksTest"
version := "1.0"
scalaVersion := "2.11.8"
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.4.0", // % "provided"
  "org.apache.spark" %% "spark-sql" % "2.4.0" // % "provided"
)
dbcUsername := "token"
dbcPassword := "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
dbcApiUrl := "https://westeurope.azuredatabricks.net/api/1.2"
dbcClusters += "Cluster1"
And here is my plugins.sbt file:
addSbtPlugin("com.databricks" %% "sbt-databricks" % "0.1.5")
When I try to list the clusters with dbcListClusters, I get the following error:
[trace] Stack trace suppressed: run 'last *:dbcFetchClusters' for the full output.
[error] (*:dbcFetchClusters) org.apache.http.client.HttpResponseException: Bad Request
[error] Total time: 3 s, completed 28 mars 2019 16:18:54
Can you help me resolve this error?
Thanks